Team:UChicago/Design



Optizyme 2020



Optizyme: Algorithm Design


We will begin introducing our package with a technical explanation of its main feature: the algorithm that optimizes enzyme concentrations. Instead of formally defining and proving each mathematical concept the algorithm uses, we will explain how it works by building some intuition. First, let us lay out how we frame this optimization problem. If we have an enzyme pathway of length n, where n is the number of enzymes, we can imagine the enzyme concentrations as a list, where x1, x2, x3… xn are the concentrations of each of the enzymes. Each unique combination of enzyme concentrations yields a certain amount of final product after a given time, and this final yield changes as the concentrations of the different enzymes change. Viewed this way, the final product concentration is a function of the enzyme concentrations, which we can write as f(x1, x2, x3… xn) = y, where y is the yield for a given combination of starting enzyme concentrations. The model or simulation that our algorithm evaluates effectively functions as a black box that converts an input of enzyme concentrations into an output of final yield.

The graph of this function exists in n+1 dimensions, so it is not visualizable in general, but we will attempt to demonstrate how we use this function to figure out the optimal ratio of enzymes. The gradient vector is a generalization of the one dimensional derivative, and its components are the partial derivatives of the function with respect to each of the inputs. For the function we have detailed above, the gradient of our function is ∇f = <fx1, fx2, … fxn>, where the notation fx1 indicates the partial derivative of the function with respect to the first input. The gradient vector points in the direction of greatest increase of the function near the point where it is calculated, so the opposite direction of the gradient vector is the direction of greatest decrease. The implication of this property is that if we calculate the gradient at our current point and take a sufficiently small step in the opposite direction, we are guaranteed to end up at a point with a decreased value of f; likewise, a sufficiently small step in the direction of the gradient guarantees a new point with an increased value of f. Effectively, this kind of optimization "follows" the gradient vector of the function and uses this property to find minimum and maximum points. The algorithm we have implemented is the minimization version and belongs to a class of algorithms known as gradient descent.
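A short sketch makes the step-following idea concrete. All names here are illustrative, not Optizyme's actual API, and the toy yield function stands in for the kinetic model "black box":

```python
import numpy as np

def toy_yield(x):
    # Stand-in for the kinetic-model black box: Optizyme would obtain
    # this value by simulating the pathway. This smooth toy function is
    # only for illustration, with a known optimum at (0.2, 0.5, 0.3).
    return -np.sum((x - np.array([0.2, 0.5, 0.3])) ** 2)

def numerical_gradient(f, x, h=1e-6):
    # Central finite differences: one partial derivative per input.
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2 * h)
    return grad

def gradient_ascent(f, x0, lr=0.1, iters=200):
    # Stepping *with* the gradient increases f (maximizing yield);
    # the minimization variant described in the text steps against it.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + lr * numerical_gradient(f, x)
    return x
```

Starting from a 1:1:1 guess, `gradient_ascent(toy_yield, [1.0, 1.0, 1.0])` walks to the toy optimum near (0.2, 0.5, 0.3).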

An analogy may make the algorithm easier to understand. Imagine you are walking around in a foggy forest, where you can’t see around you (we’ll also imagine this forest has many dips, hills, and ridges), and you are trying to reach the lowest point in the forest. The only tool you have is a compass, but instead of telling you which way is north and which way is south, the compass points in the direction of greatest decrease in altitude. To find your way to a lower point, you would look at the compass, then take a step in the direction it is pointing, look at the compass again, and then take a step in the new direction it is pointing. In gradient descent, the function is the landscape you are walking on, and the gradient vector is the compass that tells you which direction to go.

All this is well and good, but what happens if we end up in a small ditch (imagine a bowl shaped portion of terrain)? At the bottom of this ditch, we are at the lowest point near us, and the gradient vector is something called a zero vector, meaning it does not point in any direction. This should make intuitive sense: the bottom of the ditch is flat, so the gradient, which generalizes the derivative, is zero there. At this point, gradient descent will not move us from the bottom of the ditch, because it is the lowest point in the vicinity.

Let us now imagine that we are still stuck in the ditch, but there is a ditch next to the one we are stuck in that is actually lower. In this case, the lowest point in the forest would be in the neighboring ditch, but we are stuck in a different one. To work around this issue, we explore the terrain with not just one searcher, but several. Each of our searchers starts in a different portion of the terrain, so the gradient descent compass takes them along different paths to different local minima, one of which is hopefully the global minimum, though this is not guaranteed.
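The multi-searcher idea can be sketched as a multi-start gradient descent. The landscape below is a made-up one-dimensional function with two "ditches", not anything from the package:

```python
import numpy as np

def f(x):
    # Toy landscape with two ditches: a shallow local minimum near
    # x = 1 and a deeper global minimum near x = -1.
    return (x[0] ** 2 - 1) ** 2 + 0.3 * x[0]

def descend(f, x, lr=0.02, iters=500, h=1e-6):
    # Plain gradient descent, with the gradient estimated by
    # central finite differences.
    x = np.asarray(x, dtype=float)
    for _ in range(iters):
        grad = np.array([(f(x + np.eye(len(x))[i] * h) -
                          f(x - np.eye(len(x))[i] * h)) / (2 * h)
                         for i in range(len(x))])
        x = x - lr * grad
    return x

def multistart(f, n_starts=20, dim=1, seed=0):
    # Launch several "searchers" from random starting points and keep
    # the lowest local minimum found. The global minimum is likely,
    # but not guaranteed, to be among the results.
    rng = np.random.default_rng(seed)
    results = [descend(f, rng.uniform(-2, 2, size=dim))
               for _ in range(n_starts)]
    return min(results, key=f)
```

With enough starts, at least one searcher lands in the basin of the deeper ditch near x ≈ -1 rather than the shallower one near x ≈ 1.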

In our gradient descent algorithm, random starting points are generated, but some of these random points don't make sense. For example, if one of the random points were up in the mountains near the forest, it would be a bad starting point for finding the lowest point in the forest. Similarly, starting points above a certain threshold in the algorithm are deemed bad starting points and thrown out; this prevents the algorithm from wasting computational power searching enzyme concentrations that are unlikely to be productive. The specific threshold is determined by a user-supplied reasonable guess for the enzyme concentrations (usually a 1:1 ratio should suffice, since the purpose of this threshold is not to find the answer accurately but to throw out horrible starting points). This feature of our algorithm can be considered Monte Carlo sampling of starting points with a rejection criterion.
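The sampling-with-rejection step might look like the following sketch. The acceptance criterion here (accept anything at least as good as the 1:1 guess) is an assumption for illustration; the exact threshold Optizyme uses may differ:

```python
import numpy as np

def sample_starts(yield_fn, n_enzymes, total, n_samples=100, seed=0):
    # Monte Carlo sampling of candidate starting concentrations with a
    # rejection criterion based on a reasonable 1:1 baseline guess.
    rng = np.random.default_rng(seed)
    baseline = yield_fn(np.full(n_enzymes, total / n_enzymes))  # 1:1 ratio
    accepted = []
    for _ in range(n_samples):
        # Dirichlet samples lie on the simplex x1 + ... + xn = 1, so
        # scaling by `total` satisfies the concentration constraint.
        x = rng.dirichlet(np.ones(n_enzymes)) * total
        if yield_fn(x) >= baseline:   # reject "horrible" starting points
            accepted.append(x)
    return accepted
```

For example, with a toy yield function that simply rewards enzyme 1 (`lambda x: x[0]`), every accepted starting point has at least the baseline share of enzyme 1 and still sums to the total concentration.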

Another important input to our algorithm is the total enzyme concentration, which serves as the constraint on the enzyme concentrations searched. The constraint is important because it allows the end user to specify what maximum total enzyme concentration is economically feasible, and the algorithm searches within that constraint. How the algorithm works with this constraint is intimately bound to linear algebra in an arbitrary number of dimensions, but the intuition is most easily imparted through a simple example.

Say we have an enzyme pathway with only two enzymes in it, and our maximum total concentration is 1 (the units are arbitrary in this example). We represent the enzyme concentrations with x and y, and, remembering that our final yield is a function of the input concentrations, we can write our function as f(x, y). The constraint tells us that x + y = 1. Plotting x + y = 1 gives a line that is perpendicular to the vector <1,1>; in other words, all vectors connecting points within the line are perpendicular to the vector <1,1>.
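The perpendicularity claim is easy to verify numerically: pick any two points on the constraint line and dot their difference with <1,1>.

```python
import numpy as np

# Two points whose coordinates satisfy the constraint x + y = 1.
p = np.array([0.3, 0.7])
q = np.array([0.9, 0.1])

# The vector connecting them lies within the constraint line, and its
# dot product with <1, 1> is zero, i.e. the two are perpendicular.
v = q - p
print(np.isclose(np.dot(v, np.array([1.0, 1.0])), 0.0))  # True
```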

Now if we consider an enzyme pathway with three enzymes under the same constraint, we represent the function as f(x, y, z), and the constraint requires that x + y + z = 1. Plotting this, we see that x + y + z = 1 is a plane where vectors connecting points within the plane are perpendicular to the vector <1,1,1>.

Pathways with more than three enzymes can't be visualized, but the rules of linear algebra allow us to extend the idea of perpendicularity (orthogonality) into higher dimensions. Regardless of the total enzyme concentration chosen, the constraint is a plane in an arbitrary number of dimensions, called a hyperplane, where vectors connecting points within the hyperplane are perpendicular to the vector <1,1,1…> with as many entries as there are enzymes in the pathway.

If we think about the three dimensional geometric space represented by the x, y, and z axes, it is easy to see that if we define the vectors <1,0,0>, <0,1,0>, and <0,0,1>, which represent one step in the x, y, and z directions, respectively, then we can reach any point in three dimensional space by taking some combination of these steps. For example, to reach the point <1,2,3> we take 1 step in the x direction, 2 steps in the y direction, and 3 steps in the z direction. However, the steps we take don't have to be exactly in the x, y, and z directions: if we defined new vectors such that each one was perpendicular to the others, these new "steps" would also allow us to reach every point in three dimensional space. A key idea here is that if our steps only exist in three dimensions, we can never leave three dimensional space and enter four dimensions the way we could if our vectors were four dimensional.

An important property of hyperplanes is that they are something called subspaces, which means they behave like three dimensional geometric space in that every point in the hyperplane can be reached by taking a combination of steps. By defining these steps to exist only within the hyperplane, rather than in the full n dimensional space, where n is the number of enzymes, we can force our gradient descent algorithm to take steps only inside the hyperplane that fulfills the given constraint, rather than through all of the n dimensional space. These steps are called "basis vectors" and are determined through processes called "orthonormalization algorithms". A very common orthonormalization algorithm is Gram-Schmidt orthonormalization.
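The construction of these in-plane basis vectors can be sketched directly with Gram-Schmidt. The function names are illustrative, not Optizyme's:

```python
import numpy as np

def hyperplane_basis(n):
    # Build an orthonormal basis ("steps") for the constraint hyperplane
    # x1 + ... + xn = const via Gram-Schmidt orthonormalization.
    normal = np.ones(n) / np.sqrt(n)   # unit vector along <1,1,...,1>
    basis = [normal]                   # seed Gram-Schmidt with the normal
    for i in range(n - 1):
        v = np.zeros(n)
        v[i] = 1.0                     # candidate step e_i
        for b in basis:                # subtract projections onto earlier vectors
            v -= np.dot(v, b) * b
        basis.append(v / np.linalg.norm(v))
    return np.array(basis[1:])         # drop the normal; keep in-plane steps

def project_into_plane(grad, basis):
    # Express a gradient step using only the in-plane basis vectors, so
    # gradient descent never leaves the constraint hyperplane.
    return sum(np.dot(grad, b) * b for b in basis)
```

Any step built from these basis vectors has components summing to zero, so adding it to a point on the hyperplane keeps the total enzyme concentration fixed.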


Differential Equation Framework for Kinetic Models

Next, we will address the design behind how our package generates the system of differential equations that describes the time-course of the system from the enzyme mechanisms and kinetic parameters. We will lay out which interactions are responsible for which terms in each differential equation, and we intend to not only describe how our code generates differential equations to build a model, but also to describe the mathematical formulation of kinetic modelling in enough detail so that users of Optizyme will be able to construct their own kinetic models. Think of this as a SparkNotes guide to kinetic modeling.
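Before walking through the rate laws, a small end-to-end sketch shows what such a generated system looks like. The pathway, parameter values, and code structure below are made up for illustration; Optizyme builds the analogous system from user-supplied mechanisms and kinetic parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative two-enzyme pathway S1 --E1--> S2 --E2--> P, with each
# step modeled by Michaelis-Menten kinetics. All values are invented.
kcat1, E1, Km1 = 10.0, 0.1, 0.5
kcat2, E2, Km2 = 8.0, 0.1, 0.3

def rhs(t, y):
    s1, s2, p = y
    v1 = kcat1 * E1 * s1 / (Km1 + s1)   # rate of the first enzyme
    v2 = kcat2 * E2 * s2 / (Km2 + s2)   # rate of the second enzyme
    # Each species' derivative is production minus consumption.
    return [-v1, v1 - v2, v2]

# Integrate the time-course from [S1] = 1, [S2] = [P] = 0.
sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0], rtol=1e-8, atol=1e-10)
final_yield = sol.y[2, -1]   # final product concentration
```

The `final_yield` value at the end of the integration is exactly the quantity the optimization algorithm treats as its black-box output f(x1, …, xn).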


Guide to Kinetic Modeling

We begin by introducing the one-substrate steady state Michaelis-Menten equation. This equation makes the assumption that the association of enzyme and substrate is significantly faster than the catalytic step itself, so that the concentration of the enzyme-substrate complex remains approximately constant.
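In standard notation, using the symbols defined below, the equation is:

```latex
v = \frac{V_{max}[S_1]}{K_{m1} + [S_1]}, \qquad V_{max} = k_{cat}[E_1]
```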

Here, the enzyme reaction rate is defined using the following values. Vmax is the turnover rate of the enzyme, kcat, multiplied by the total enzyme concentration, [E1]. [S1] represents the concentration of substrate 1, and Km1 is the Michaelis constant associated with substrate 1. Conceptually, the turnover rate is how fast the enzyme works, and is usually given in units of substrate molecules processed per unit time. The Michaelis constant is the substrate concentration at which the enzyme works at half its maximum rate. It is determined experimentally in kinetic experiments and is readily available for many enzymes in databases such as BRENDA. Methods of determining the Michaelis constant, as well as the derivation of the parameter, are well documented in any introductory biochemistry textbook and can be consulted if necessary. If there are competitive inhibitors of the substrate, then the equation is modified as follows:
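In the standard competitively inhibited form, each inhibitor Ij with concentration [Ij] and inhibition constant Kij contributes a term to the Km factor:

```latex
v = \frac{V_{max}[S_1]}{K_{m1}\left(1 + \sum_j \frac{[I_j]}{K_{ij}}\right) + [S_1]}
```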

If competitive inhibition is present on the enzyme, then the additional terms that arise are the concentrations of the competitive inhibitors (I) and the corresponding competitive inhibition constants. The inhibition constant is the thermodynamic dissociation constant of the inhibitor from the enzyme and is determined experimentally by varying the inhibitor concentration and best-fitting the equation parameters to the resulting data. Intuitively, the effect of these inhibitor terms is to increase the apparent Km, which increases the substrate concentration required for the enzyme to function at half its maximum rate, thus reducing the efficiency of the enzyme. The equation from above is rearranged to show this interaction:
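Grouping the inhibitor terms into an apparent Michaelis constant makes the interaction explicit (written here for a single inhibitor):

```latex
K_{m1}^{app} = K_{m1}\left(1 + \frac{[I]}{K_i}\right), \qquad
v = \frac{V_{max}[S_1]}{K_{m1}^{app} + [S_1]}
```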

A key takeaway is that even though Km increases, if the substrate concentration ([S1]) increases sufficiently, then the maximum turnover rate can still be achieved.

In the presence of noncompetitive inhibition, the additional terms that arise are the concentrations of the noncompetitive inhibitors, along with their respective inhibition constants. Again, these inhibition constants are thermodynamic dissociation constants of the inhibitors and are determined experimentally. However, noncompetitive inhibitors do not make their impact by increasing Km; rather, they reduce the efficiency of the enzyme by decreasing Vmax. Because Vmax is decreased, even if the substrate concentration is increased the enzyme cannot achieve its normal maximum speed, and so it remains inhibited. The equation representing an enzyme with noncompetitive inhibition is given below.
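In standard form, pure noncompetitive inhibition scales Vmax rather than Km (written here for a single inhibitor):

```latex
v = \frac{V_{max}[S_1]}{\left(1 + \frac{[I]}{K_i}\right)\left(K_{m1} + [S_1]\right)}
  = \frac{V_{max}^{app}[S_1]}{K_{m1} + [S_1]}, \qquad
V_{max}^{app} = \frac{V_{max}}{1 + \frac{[I]}{K_i}}
```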

If an enzyme has both competitive and noncompetitive inhibition, then the equation that models the enzyme’s rate is:
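Combining the two effects, and assuming the competitive inhibitor Ic and noncompetitive inhibitor In act independently, one standard mixed form is:

```latex
v = \frac{V_{max}[S_1]}{\left(1 + \frac{[I_n]}{K_{in}}\right)\left[K_{m1}\left(1 + \frac{[I_c]}{K_{ic}}\right) + [S_1]\right]}
```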

Now that we have gone through the intuition for which interactions are responsible for which terms in the rate equations, we will present the equations that model enzymes with two substrates. For a bisubstrate enzyme with no competitive or noncompetitive inhibition, and whose substrates can bind in any order, the rate equation modelling the catalytic activity of the enzyme is:
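A standard rapid-equilibrium, random-order bisubstrate (bi-bi) rate equation matching this description is:

```latex
v = \frac{V_{max}[S_1][S_2]}{K_{iS_1}K_{m2} + K_{m2}[S_1] + K_{m1}[S_2] + [S_1][S_2]}
```

where KiS1 is the dissociation constant of substrate 1 from the free enzyme.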

If the enzyme exhibits competitive inhibition, then the rate equation is:
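One consistent way to write this form, assuming each competitive inhibitor scales the denominator terms corresponding to enzyme states that its substrate has not yet bound (with α1 = 1 + [I1]/Ki1 and α2 = 1 + [I2]/Ki2), is the following; the exact grouping depends on the mechanism, so this should be checked against the form Optizyme implements:

```latex
v = \frac{V_{max}[S_1][S_2]}{K_{iS_1}K_{m2}\,\alpha_1\alpha_2 + K_{m2}[S_1]\,\alpha_2 + K_{m1}[S_2]\,\alpha_1 + [S_1][S_2]}
```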

Note that in the rate equation above, the terms I1 and I2 refer to the competitive inhibitors for substrate 1 and substrate 2, respectively. The Ki terms are the corresponding dissociation constants for each of the inhibitors. If the enzyme exhibits both competitive and noncompetitive inhibition, then the rate equation is:
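Since a noncompetitive inhibitor I3 reduces Vmax, its effect can be written as a factor 1 + [I3]/Ki3 scaling the entire denominator; with α1 = 1 + [I1]/Ki1 and α2 = 1 + [I2]/Ki2 as before, one consistent combined form (again an assumption to check against Optizyme's implementation) is:

```latex
v = \frac{V_{max}[S_1][S_2]}{\left(1 + \frac{[I_3]}{K_{i3}}\right)\left(K_{iS_1}K_{m2}\,\alpha_1\alpha_2 + K_{m2}[S_1]\,\alpha_2 + K_{m1}[S_2]\,\alpha_1 + [S_1][S_2]\right)}
```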

In the rate equation presented above, the new term I3 refers to the noncompetitive inhibitors of the enzyme.





UChicago GeneHackers

Computationally optimizing biosynthetic pathways