Team:Vilnius-Lithuania/Software

Overview

Lateral flow assay (LFA) tests are becoming more popular each year because of their ability to rapidly and cheaply detect small amounts of analyte [1]. However, straightforward questions - for instance, the appropriate concentration of capture sites, the optimal test line location, or the sufficient sample volume - can be answered only by experimentation [2]. Thus, the development of LFA tests is very tedious work that can exhaust the most valuable resources - time and money.

Although LFA strip tests are widely used in the healthcare industry, in home care, and in the monitoring of water quality and the food supply, software that would facilitate the development of such tests is not freely available on the market [3]. Once we started to plan our experiments, it became hard to decide where to put the control and capture probes and which reagent concentrations would be optimal. Such problems are faced by anybody who has ever tried to design an LFA strip test. It would be desirable and convenient to have a sophisticated, easy-to-use tool that could rapidly and inexpensively predict the various parameters of an LFA. To solve this problem, our team presents a novel software tool - onFlow.

To implement this, we utilized the classic Waterfall engineering model. Even though more advanced models that use iterative approaches exist, we chose this one because it is the most appropriate choice for projects that are well defined, have clear and fixed requirements, a short duration, and a concise idea. The waterfall model describes five main steps of the software life cycle: communication, planning, modelling, construction and deployment, all of which are described in the following sections.

Figure 1. Diagram of the waterfall software development process model.

Communication

The first stage in designing any type of software is to gather all the requirements and specify what features will be implemented.

Since onFlow is a web-based user interface for our mathematical model of LFA parameters, it requires only a few important features - fields for user inputs, an API for front end and back end communication, and a clear output of results. We also added a feature that allows users to search for the values of several parameters in an external database, which alleviates the workload even more.

As for the requirements, there were only a few - onFlow had to be intuitive and quick. Besides these, the most important requirement was clear and easy-to-understand documentation of the different parameters utilized in the mathematical model of the system. Keeping these requirements and features in mind, we continued on to the planning stage of the engineering cycle.

Planning

The second stage of the software development life cycle usually involves three main steps: estimating the cost and schedule of the project and deciding on progress management principles. Firstly, we estimated the cost, which consisted of two parts - website domain registration and a Virtual Private Server (VPS) for hosting onFlow (it was also used for ‘The 6th SynBio Sense’). Secondly, we planned our schedule to fit the Wiki Freeze deadline - October 27th. Since we came up with the idea for the software in mid-August, we had two months to finish the project. We distributed the different tasks (e.g. front end, back end, external database integration) amongst ourselves and worked in parallel to speed up the process. Lastly, to keep track of our progress, we held weekly meetings where we discussed what had been done in that period and what tasks awaited us in the next one. These steps helped us ensure that onFlow would be delivered on time.

Modelling

We divided our software architecture into two main parts - the back end, which is the logical component, and the front end, the graphical user interface. While the front end is only used for taking user inputs and showing the results, the back end does all the ‘heavy lifting’ - computing a suitable strip test design and sending API requests to the external KOFFI database. In this section we describe the thought process behind designing the back end structure of onFlow.

Figure 2. Diagram showing the sequence of operations in the software.

The whole data flow of the software is displayed in the diagram (Fig. 2). Most of the steps are straightforward. Note that steps 1-4 are optional, because the user can input the association and dissociation rates themselves.

The split between the back end and the front end is beneficial because the components can be tested individually and reused later. The back end exposes two API endpoints: one for retrieving rates from the KOFFI DB and one for the actual calculation interface.
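The following Flask sketch illustrates how two such endpoints could be laid out. The route names, helper functions and response fields are illustrative assumptions, not the actual onFlow API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def search_koffi(query):
    """Hypothetical helper: forward the query to the KOFFI database."""
    return {"query": query, "results": []}

def run_simulations(params):
    """Hypothetical helper: run the two FEniCS simulations consecutively."""
    return {"test_line_location": None, "duration": None}

@app.route("/api/rates")
def rates():
    # Endpoint 1: look up association/dissociation rates in the KOFFI DB
    return jsonify(search_koffi(request.args.get("query", "")))

@app.route("/api/simulate", methods=["POST"])
def simulate():
    # Endpoint 2: the actual calculation interface for the PDE model
    return jsonify(run_simulations(request.get_json()))
```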

Mathematical model

Not every LFA user is able to understand partial differential equations (PDEs), let alone correctly formulate variational problems and simulate the whole nonlinear PDE system. Therefore, since the model showed great agreement with our wet lab results, we decided to fill this gap by integrating our LFA mathematical model into the software.

We chose to implement the model using FEniCS, a Python-friendly computational finite element method platform. This was done because it is open-source, has a Python interface, and its back end is implemented in C++, so it does not compromise speed. The model simulates a system of advection-diffusion-reaction partial differential equations and estimates the optimal test line location.
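To give a flavour of what such a simulation looks like, the sketch below solves a single 1D advection-diffusion-reaction equation for the analyte concentration in legacy FEniCS. All parameter values, and the reduction to one species, are illustrative assumptions rather than our actual model.

```python
from fenics import *

# Illustrative 1D advection-diffusion-reaction solve for the analyte
# concentration a(x, t); all values below are assumed, not our real model.
L_strip = 0.03              # membrane length [m] (assumed)
D = Constant(1e-9)          # diffusion coefficient [m^2/s] (assumed)
v = Constant(1e-4)          # wicking velocity [m/s] (assumed)
k1 = Constant(1e3)          # association rate constant (assumed)
dt = 0.1                    # time step [s]

mesh = IntervalMesh(300, 0.0, L_strip)
V = FunctionSpace(mesh, "P", 1)

a = TrialFunction(V)
w = TestFunction(V)
a_n = interpolate(Constant(0.0), V)    # solution at the previous time step
p = interpolate(Constant(1e-7), V)     # detection probe concentration (assumed)

# Implicit Euler weak form of: da/dt + v*da/dx = D*d2a/dx2 - k1*a*p
F = ((a - a_n) / dt) * w * dx \
    + v * a.dx(0) * w * dx \
    + D * a.dx(0) * w.dx(0) * dx \
    + k1 * a * p * w * dx

bc = DirichletBC(V, Constant(5e-7), "near(x[0], 0.0)")  # inlet analyte (assumed)
a_sol = Function(V)
for step in range(600):                # simulate 60 s
    solve(lhs(F) == rhs(F), a_sol, bc)
    a_n.assign(a_sol)
```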

In the mathematical modeling section we describe the mathematical background of LFA, the problems we faced, and how our modeling helped us answer questions related to LFA strip design and facilitated the development process. Details can be found on the modelling page.

Construction
Back End

Briefly, upon the initial user request, the software’s back end uses the FEniCS platform to run two PDE-based simulations. The first simulation evaluates the optimal test line location on the strip, i.e. the place where a sufficient amount of analyte-detection probe complex is formed. The second simulation then uses the results of the first to calculate the duration of the test and the minimum volume of the analyte solution, i.e. the amount of analyte needed to obtain sufficient sensitivity at the test line. Thus, onFlow allows users to vary different lateral flow assay parameters, such as capillary flow time, analyte concentration, probe concentration or affinity constants, in order to obtain a suitable strip design, optimize reagent consumption, and decrease the timeline and cost.

However, during the development of our model and software we noticed that a manual or experimental search for kinetic constants, which are important parameters for the lateral flow assay model, could take a lot of precious time. Therefore, we decided to broaden the scope of our software by incorporating the ability to extract kinetic constants from a recently published platform [4] that stores binding rates (association, dissociation) in a novel database [5]. In short, our software can use the KOFFI platform’s API to search for user-defined dissociation or association rates of biomolecular interactions. In this way users can find the kinetic constants of biomolecular interactions between DNA, RNA, proteins and chemical compounds that are necessary for LFA development. If such parameters are not available in the KOFFI database, users can either use default values or specify their own binding parameters.
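A sketch of such a lookup is shown below; the endpoint URL, query parameters and response fields are assumptions for illustration - consult koffidb.org for the real API.

```python
import requests

KOFFI_API = "http://koffidb.org/api/interactions"   # assumed endpoint

def fetch_rates(molecule_a, molecule_b):
    """Query KOFFI for the binding rates of two interaction partners.

    The query parameters and JSON field names below are assumptions.
    """
    resp = requests.get(
        KOFFI_API,
        params={"search": f"{molecule_a} {molecule_b}"},
        timeout=10,
    )
    resp.raise_for_status()
    entries = resp.json()
    if not entries:
        return None, None                 # fall back to default values
    return entries[0].get("k_on"), entries[0].get("k_off")
```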

Video 1. A screen recording of a user retrieving rates from the KOFFI database using our software.

The overall structure of our back end is not complicated: it is split into two Python scripts that are run consecutively. First, the test line location simulation is run to find the optimal distance. Then the second simulation is run with the results of the first to determine the development of the signal at that location. The results are then sent to the front end.
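A minimal orchestration sketch, assuming the two scripts exchange data through JSON files (the script and file names are hypothetical):

```python
import json
import subprocess

def run_pipeline(params):
    """Run the two simulation scripts consecutively (names are assumed)."""
    with open("params.json", "w") as f:
        json.dump(params, f)
    # First script: find the optimal test line location
    subprocess.run(["python", "test_line_location.py", "params.json"], check=True)
    # Second script: reads the first script's output and computes the signal
    subprocess.run(["python", "signal_simulation.py", "params.json"], check=True)
    with open("results.json") as f:
        return json.load(f)
```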

Front End

onFlow uses quite a lot of different parameters in order to make a rapid and accurate prediction. The number of these parameters might be overwhelming, so our goal was to make the UI as user-friendly as possible. To achieve this, we documented each parameter not only with a description but also with a short animation that visualises how the parameter fits into the whole system. This ought to help users navigate the software easily, even if it is their first time using complex mathematical modelling software such as this one.

Video 2. A screen recording of a user inspecting the built-in documentation of the parameters.

For the front end of our software we used React, a modern JavaScript library. It allowed us to create an easily scalable and fast user interface while also maintaining our team’s design brand.

Experimental Validation

The validation of our software is tightly interconnected with our mathematical modeling, which is described in more detail on the modeling page.

In short, to validate our software, we performed a number of pilot experiments to examine our strip test design. First of all, we tried to determine the minimum amount of analyte necessary to obtain visible results on the test line in a fixed amount of time. Serial dilutions of a known analyte were prepared and the samples were applied to the sample pad. Photographs of the strip tests were taken over a period of 20 minutes.

Video 3. Timelapse of F. psychrophilum LFA test (lower) and control (higher) lines development using serial dilutions of DNA. Concentration of analyte from left to right: 500 nM, 250 nM, 125 nM, 62.5 nM, 31.25 nM, 15.6 nM, 7.8 nM, 3.9 nM, 1.95 nM, 0.98 nM, 0.49 nM, 0 nM.

The minimal amount of analyte the strip test could detect was ~10 nM. The signal is clearly visible from around 62.5 nM.
Several more experiments were made to test the sensitivity of the lateral flow assay. The results were qualitatively explained by the empirical formula
\begin{equation}\label{empirical} \dot{c} = k_1 \frac{(a_0 - c)^n (p_0 - c)}{(k p_0)^n + (a_0 - c)^n} - k_{-1} c, \end{equation}
with \(k_1 = 10^{-3}\), \(k_{-1} = 10^{-4}\), \(n = 2\) and \(k = 2\), where \(a_0\) and \(p_0\) denote the initial analyte and probe concentrations and \(c\) the concentration of the formed complex. The results were visible (respectively, invisible) if the corresponding trajectories were above the upper (respectively, below the lower) threshold line. In the intermediate region the results were inconclusive.
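To illustrate how such trajectories are obtained, the sketch below integrates the empirical equation for several analyte dilutions. The probe concentration and the time span are assumed values, so the curves are only qualitative.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Rate constants from the text; p0 and the time span are assumptions.
k1, k_1, n, k = 1e-3, 1e-4, 2, 2
p0 = 100.0                               # probe concentration [nM] (assumed)

def dc_dt(t, c, a0):
    return k1 * (a0 - c) ** n * (p0 - c) / ((k * p0) ** n + (a0 - c) ** n) - k_1 * c

for a0 in (62.5, 31.25, 15.6, 7.8):      # analyte dilutions from the experiment
    sol = solve_ivp(dc_dt, (0, 1200), [0.0], args=(a0,), max_step=1.0)
    plt.plot(sol.t / 60, sol.y[0], label=f"{a0} nM")

plt.xlabel("time [min]")
plt.ylabel("complex concentration c [nM]")
plt.legend()
plt.show()
```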

Figure 3. Threshold determination with our qualitatively predicted formula for the minimal visibility of the test line. Below the green horizontal dashed line there is no signal; above the orange dashed line the signal is visible. Between the green and orange dashed lines the result is variable or indeterminate. More details and further simulations, with explanations, can be found in the modeling section.

The next important experiment we performed was the determination of the flow rate from experimental data. The flow rate of a strip test is usually provided by the manufacturer; however, we noticed that the flow rate is not a constant value. Therefore, we decided to fit two different empirically derived flow rate models to the flow rates we measured experimentally at different time points.

Video 4. Flow rate determination. Sample volumes ranging from 50 uL to 110 uL.

Video 5. Flow rate determination. Sample volumes ranging from 50 uL to 200 uL.

This experiment also provided important data on the optimal sample volume that has to be applied to the sample pad to obtain the most suitable flow rate. More precisely, volumes of less than 60 microliters were not enough to percolate into the membrane, and those of 150 microliters spilled over. While the velocity increased with larger applied volumes, it was robust to changes within the range of 60-100 microliters.

Figure 4. Fitted time-series data for the lateral flow velocity. Different buffer volumes (50, 60, 70, 80, 90, 100, 110 $\mu L$) were applied onto strip membranes set next to a ruler, and the experiment was recorded with a video camera. The result shows a clear exponential fit. The velocity was robust to changes in volume in the range of 60-100 $\mu L$, but started increasing afterwards. The 50 $\mu L$ test was discarded because the solution never permeated into the strip membrane.

The proposed velocity functions were \begin{equation} V(t) = Ce^{-rt}, \qquad V(x) = \frac{C}{x}, \end{equation} corresponding to the wetting front profiles \begin{equation} x(t) = \frac{C}{r}\left(1 - e^{-rt}\right), \qquad x(t) = \sqrt{2Ct}. \end{equation} Of these, the exponential velocity profile had the best fit and was subsequently used in the software. We therefore used the flow rate fitted for 100-microliter volumes as the default flow rate value in onFlow; however, users can perform their own experiments and adjust the optional \(C\) and \(r\) parameters.
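Such a fit can be reproduced with a few lines of SciPy; the measurement arrays below are illustrative placeholders, not our data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder measurements of the wetting front position over time (assumed).
t = np.array([0, 30, 60, 120, 240, 480], dtype=float)   # time [s]
x = np.array([0.0, 0.6, 1.1, 1.8, 2.5, 2.9])            # front position [cm]

def front(t, C, r):
    # Wetting front for the exponential velocity profile V(t) = C*exp(-r*t)
    return (C / r) * (1 - np.exp(-r * t))

(C_fit, r_fit), _ = curve_fit(front, t, x, p0=(0.02, 0.01))
print(f"C = {C_fit:.4f} cm/s, r = {r_fit:.4f} 1/s")
```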

After collecting the information about our strip test’s sensitivity and flow rate, we tried to validate, with the onFlow software, the test line location obtained from our modeling. When we inserted the parameters of our experimental system into onFlow, the software predicted the optimal test line location, i.e. the one at which a sufficiently strong signal becomes visible as quickly as possible.

To make sure that this location is the optimal one, we examined two more locations, 0.5 cm further from and 0.5 cm closer to the location obtained from the simulation. 100 nM, 25 nM and 6.25 nM solutions were applied to three strips with test line locations at 0.5 cm, 1 cm (optimal for 25 nM) and 1.5 cm. The hypothesis was that the middle test line location (simulated by onFlow) would yield a more intense signal than the closest test line, while the most distant test line’s signal would not differ much from that at the optimal location.

Video 6. Optimal test line validation. The strip tests are arranged in groups of three. The first group’s strips are dipped into an analyte solution with a concentration of 100 nM, the second’s - 25 nM, the third’s - 6.25 nM, and the fourth’s - 0 nM. Within each group, the first strip has its test line at 1.5 cm, the second at 1 cm, and the third at 0.5 cm.

In Video 6 we can see that the test line at the location determined by the software becomes visible in a shorter period of time. Knowing that, we could happily move on to the next step - the validation of the other parameters: the amount of time needed to get a sufficiently visible test line and the quantity of analyte needed to obtain the most visible results.

To test this part of the software, we chose the optimal test line location obtained from the first simulation and suitable analyte concentrations obtained from the sensitivity experiments (100 nM, 25 nM, 7 nM). Our software correctly predicted the time required to obtain a signal (~6 min). Unfortunately, it did not correctly predict the quantity of analyte, so we decided to temporarily remove this parameter from the software. However, the empirically derived equation above does predict, from experimental data, whether the signal will be visible on the test line at a chosen analyte concentration.

We plan to incorporate this empirical equation into the software in the future, after more detailed experiments have been performed. The software allows the user to choose between both options: the empirical formula and the PDE estimation. We hope to improve the tool further, including finding the optimal quantity of analyte from the PDE, and to publish the results. All in all, both our modeling and our software helped us develop more robust and more sensitive strip tests. The resulting strip tests will help identify substances and diagnose diseases faster and more reliably.

Conclusion

We are confident that our software tool will be extremely useful to future iGEM projects, since lateral flow assays are getting more and more attention in the scientific community every year, as they can be implemented in resource-limited settings, mostly for public health in developing countries [6]. The development of LFA test strips is experiencing a renaissance, which can be explained by the high research activity in paper-based microfluidics in recent years [1]. Ultimately, we believe that from this day on, iGEM teams will use LFAs more frequently.

References

  1. Cate, D. M., Adkins, J. A., Mettakoonpitak, J. & Henry, C. S. Recent Developments in Paper-Based Microfluidic Devices. Anal. Chem. 87, 19–41 (2015).
  2. Berli, C. L. A. & Kler, P. A. A quantitative model for lateral flow assays. Microfluid Nanofluid 20, 104 (2016).
  3. Qian, S. & Bau, H. H. A mathematical model of lateral flow bioreactions applied to sandwich assays. Analytical Biochemistry 322, 89–98 (2003).
  4. Norval, L. W. et al. KOFFI and Anabel 2.0—a new binding kinetics database and its integration in an open-source binding analysis software. Database 2019, (2019).
  5. KOFFI-DB at http://koffidb.org/.
  6. Yager, P. et al. Microfluidic diagnostic technologies for global public health. Nature 442, 412–418 (2006).