Abstract: Linear programming is a set of mathematical and computational tools that allows you to find a particular solution to a system of linear equations and inequalities, a solution that corresponds to the maximum or minimum of some other linear function.

This article is shared from the Huawei Cloud community post "Practicing Linear Programming: Optimization With Python," by Yuchuan.

## Description of linear programming

### What is linear programming?

Imagine that you have a system of linear equations and inequalities. Such systems usually have many possible solutions. Linear programming is a set of mathematical and computational tools that allows you to find a particular solution to the system, one that corresponds to the maximum or minimum of some other linear function.

### What is mixed integer linear programming?

**Mixed integer linear programming** is an extension of **linear programming**. It handles problems in which at least one variable takes on a discrete integer value instead of a continuous one. Although mixed integer problems look similar to continuous-variable problems at first sight, they offer significant advantages in terms of flexibility and precision.

**Integer variables** are important for correctly representing quantities that are naturally expressed with integers, such as the number of aircraft produced or the number of customers served.

A particularly important kind of integer variable is the **binary variable**. It can take only the values **zero** or **one** and is useful for making yes-or-no decisions, such as whether a factory should be built or whether a machine should be turned on or off. You can also use binary variables to model logical constraints.
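As a concrete sketch of a yes-or-no decision (this toy example, including the project costs and profits, is my own and not from the original article; it uses scipy.optimize.milp, which requires SciPy 1.9 or later), a binary variable can be modeled as an integer variable restricted to the range [0, 1]:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Hypothetical yes-or-no decision: fund project 1 (profit 3, cost 5)
# and/or project 2 (profit 4, cost 4) within a total budget of 5.
profits = np.array([3, 4])
costs = np.array([[5, 4]])

res = milp(
    c=-profits,  # milp minimizes, so negate the profits to maximize
    constraints=LinearConstraint(costs, ub=5),  # total cost <= 5
    integrality=np.ones(2),  # both variables are integers...
    bounds=Bounds(0, 1),     # ...restricted to 0 or 1, hence binary
)
print(res.x)  # fund only the second project
```

Within the budget only one project fits, and the solver picks the more profitable one.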

### Why is linear programming important?

Linear programming is a fundamental optimization technique that has been used for decades in science- and math-intensive fields. It's precise, relatively fast, and suitable for a range of practical applications.

Mixed integer linear programming allows you to overcome many of the limitations of linear programming. You can approximate nonlinear functions with piecewise linear functions, use semi-continuous variables, model logical constraints, and more. It's a computationally intensive tool, but advances in computer hardware and software make it more applicable every day.

Often, when people try to formulate and solve an optimization problem, the first question is whether they can apply linear programming or mixed integer linear programming.

The following resources illustrate some use cases of linear programming and mixed integer linear programming:

- Gurobi optimization case studies
- Five application areas of linear programming techniques

With the increase in computing power, the improvement of algorithms, and the appearance of more user-friendly software solutions, the importance of linear programming, and especially of mixed integer linear programming, has grown over time.

### Linear programming with Python

The basic method for solving linear programming problems is called the **simplex method**, which has many variants. Another popular approach is the **interior-point method**.

Mixed integer linear programming problems are solved with more complex and computationally intensive methods like the **branch and bound method**, which uses linear programming under the hood. Some variants of this approach are the **branch and cut method**, which involves the use of cutting planes, and the **branch and price method**.

There are several suitable and well-known Python tools for linear programming and mixed integer linear programming. Some of them are open source, while others are proprietary. Whether you need a free or paid tool depends on the size and complexity of your problem, as well as on the need for speed and flexibility.

It's worth mentioning that almost all widely used linear programming and mixed integer linear programming libraries are natively written in Fortran, C, or C++. This is because linear programming requires computationally intensive work with (often very large) matrices. Such libraries are called solvers. Python tools are just wrappers around the solvers.

Python is suitable for building wrappers around native libraries because it works well with C/C++. For this tutorial, you don't need any C/C++ (or Fortran), but if you want to learn more about this cool feature, then check out the following resources:

- Building a Python C extension module
- CPython internals
- Extending Python with C or C++

Basically, when you define and solve a model, you use Python functions or methods to call a low-level library that does the actual optimization work and returns the solution to your Python object.

Several free Python libraries are designed to interact with linear or mixed integer linear programming solvers:

- SciPy Optimization and Root Finding
- PuLP
- Pyomo
- CVXOPT

In this tutorial, you'll use SciPy and PuLP to define and solve linear programming problems.

## Linear programming example

In this section, you'll see two examples of linear programming problems:

- A small problem that illustrates what linear programming is
- A practical problem related to resource allocation that illustrates linear programming concepts in a real-world scenario

You'll use Python to solve these two problems in the next section.

### Small linear programming problem

Consider the following linear programming problem:

You need to find x and y such that the red, blue, and yellow inequalities, as well as the inequalities x ≥ 0 and y ≥ 0, are satisfied. At the same time, your solution must correspond to the largest possible value of z.
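Since the original image with the system isn't included here, the problem as reconstructed from the constraint equations discussed below is:

```
maximize:    z = x + 2y

subject to:  2x + y   <= 20    (red)
            -4x + 5y  <= 10    (blue)
             -x + 2y  >= -2    (yellow)
              x >= 0, y >= 0
```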

The independent variables you need to find (in this case *x* and *y*) are called **decision variables**. The function of the decision variables to be maximized or minimized (in this case *z*) is called the **objective function**, the **cost function**, or simply the **goal**. The **inequalities** you need to satisfy are called **inequality constraints**. You can also use equations among the constraints, called **equality constraints**.

This is how you can visualize the problem:

The red line represents the function 2x + y = 20, and the red area above it shows where the red inequality isn't satisfied. Similarly, the blue line is the function −4x + 5y = 10, and the blue area is forbidden because it violates the blue inequality. The yellow line is −x + 2y = −2, and the yellow area below it is where the yellow inequality isn't valid.

If you disregard the red, blue, and yellow areas, only the gray area remains. Each point of the gray area satisfies all constraints and is a potential solution to the problem. This area is called the **feasible region**, and its points are **feasible solutions**. In this case, there's an infinite number of feasible solutions.

You want to maximize z. The feasible solution that corresponds to the maximal value of z is the **optimal solution**. If you tried to minimize the objective function instead, then the optimal solution would correspond to its feasible minimum.

Note that z is linear. You can imagine it as a plane in three-dimensional space. This is why the optimal solution must be at a **vertex**, or corner, of the feasible region. In this case, the optimal solution is the point where the red and blue lines intersect, as you'll see later.

Sometimes a whole edge of the feasible region, or even the entire region, corresponds to the same value of z. In that case, you have many optimal solutions.

You're now ready to expand the problem with the additional equality constraint shown in green:

The equation −x + 5y = 15, written in green, is new. It's an equality constraint. You can visualize it by adding a corresponding green line to the previous image:

The solution must now satisfy the green equation, so the feasible region isn't the entire gray area anymore. It's the part of the green line that passes through the gray area, from the intersection point with the blue line to the intersection point with the red line. The latter point is the solution.

If you insert the requirement that all values of x must be integers, then you get a mixed integer linear programming problem, and the set of feasible solutions changes once again:

You no longer have the green line, only the points along it where the value of x is an integer. The feasible solutions are the green points on the gray background, and the optimal one in this case is the green point nearest the red line.

These three examples illustrate **feasible linear programming problems** because they have bounded feasible regions and finite solutions.

### Infeasible linear programming problem

A linear programming problem is **infeasible** if it doesn't have a solution. This usually happens when no solution can satisfy all the constraints at once.

For example, consider what would happen if you added the constraint x + y ≤ −1. Then at least one of the decision variables (x or y) would have to be negative. This conflicts with the given constraints x ≥ 0 and y ≥ 0. Such a system doesn't have a feasible solution, so it's called infeasible.
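As a minimal sketch (using a recent SciPy with its default solver, so the call differs slightly from the linprog examples later in this article), you can verify this infeasibility programmatically:

```python
from scipy.optimize import linprog

# Maximize x + 2y (minimize its negative) subject to x + y <= -1,
# with the default bounds x >= 0 and y >= 0: no point satisfies all
# the constraints at once, so the problem is infeasible.
res = linprog(c=[-1, -2], A_ub=[[1, 1]], b_ub=[-1])
print(res.success)  # False
print(res.status)   # 2, the status code for an infeasible problem
```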

Another example would be adding a second equality constraint parallel to the green line. These two lines wouldn't have a point in common, so there wouldn't be a solution that satisfies both constraints.

### Unbounded linear programming problem

A linear programming problem is **unbounded** if its feasible region isn't bounded and the solution isn't finite. This means that at least one of your variables isn't constrained and can reach positive or negative infinity, making the objective infinite as well.

For example, consider what would happen if you took the initial problem above and dropped the red and yellow constraints. Dropping constraints from a problem is called **relaxing** the problem. In such a case, x and y wouldn't be bounded on the positive side. You'd be able to increase them toward positive infinity, yielding an infinitely large z value.
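Here's a minimal sketch of that relaxed, unbounded problem (again using a recent SciPy with its default solver; in linprog's result, status code 3 denotes an unbounded problem):

```python
from scipy.optimize import linprog

# Maximize x + 2y (minimize its negative) with only the blue
# constraint -4x + 5y <= 10 and x, y >= 0: the objective can
# grow without limit, so the problem is unbounded.
res = linprog(c=[-1, -2], A_ub=[[-4, 5]], b_ub=[10])
print(res.success)  # False
print(res.message)
```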

### Resource allocation

In the previous sections, you looked at an abstract linear programming problem that wasn't tied to any real-world application. In this section, you'll find a more concrete and practical optimization problem related to resource allocation in manufacturing.

Say that a factory produces four different products, and that the daily produced amount of the first product is x₁, the amount of the second product is x₂, and so on. The goal is to determine the profit-maximizing daily production amount for each product, with the following conditions in mind:

- The profit per unit of product is $20, $12, $40, and $25 for the first, second, third, and fourth product, respectively.
- Due to manpower constraints, the total number of units produced per day can't exceed 50.
- For each unit of the first product, three units of raw material A are consumed. Each unit of the second product requires two units of raw material A and one unit of raw material B. Each unit of the third product needs one unit of A and two units of B. Finally, each unit of the fourth product requires three units of B.
- Due to transportation and storage constraints, the factory can consume up to 100 units of raw material A and 90 units of raw material B per day.

The mathematical model can be defined as follows:
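Since the original image with the model isn't included here, this is the model reconstructed from the conditions above:

```
maximize:    20x1 + 12x2 + 40x3 + 25x4        (profit)

subject to:    x1 +  x2 +  x3 +  x4 <= 50     (manpower)
              3x1 + 2x2 +  x3       <= 100    (material A)
                     x2 + 2x3 + 3x4 <= 90     (material B)
              x1, x2, x3, x4 >= 0
```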

The objective function (profit) is defined in condition 1. The manpower constraint follows from condition 2. The constraints on raw materials A and B can be derived from conditions 3 and 4 by summing the raw material requirements for each product.

Finally, the product amounts can't be negative, so all decision variables must be greater than or equal to zero.

Unlike the previous example, you can't conveniently visualize this one because it has four decision variables. However, the principles remain the same regardless of the dimensionality of the problem.

## Linear programming: Python implementation

In this tutorial, you'll use two Python packages to solve the linear programming problems described above:

**SciPy** is a general-purpose package for scientific computing with Python. **PuLP** is a Python linear programming API for defining problems and invoking external solvers.

SciPy is straightforward to set up. Once you install it, you'll have everything you need to start. Its subpackage **scipy.optimize** can be used for both linear and nonlinear optimization.

PuLP allows you to choose solvers and formulate problems in a more natural way. The default solver used by PuLP is the COIN-OR Branch and Cut Solver (CBC). It's connected to the COIN-OR Linear Programming Solver (CLP) for linear relaxations and the COIN-OR Cut Generation Library (CGL) for cut generation.

Another great open source solver is the GNU Linear Programming Kit (GLPK). Some well-known and very powerful commercial and proprietary solutions are Gurobi, CPLEX, and XPRESS.

Besides offering flexibility when defining problems and the ability to run various solvers, PuLP is less complicated to use than alternatives like Pyomo or CVXOPT, which require more time and effort to master.

### Installing SciPy and PuLP

To follow this tutorial, you'll need to install SciPy and PuLP. The examples below use version 1.4.1 of SciPy and version 2.1 of PuLP.

You can install both using pip:

$ python -m pip install -U "scipy==1.4.*" "pulp==2.1"

You might need to run pulptest or sudo pulptest to enable the default solvers for PuLP, especially if you're using Linux or Mac:

$ pulptest

Optionally, you can download, install, and use GLPK. It's free and open source and works on Windows, MacOS, and Linux. Later in this tutorial, you'll see how to use GLPK (in addition to CBC) with PuLP.

On Windows, you can download the archives and run the installation files.

On MacOS, you can use Homebrew:

$ brew install glpk

On Debian and Ubuntu, use apt to install glpk and glpk-utils:

$ sudo apt install glpk glpk-utils

On Fedora, use dnf with glpk-utils:

$ sudo dnf install glpk-utils

You might also find conda useful for installing GLPK:

$ conda install -c conda-forge glpk

After completing the installation, you can check the version of GLPK:

$ glpsol --version

For more information, see GLPK's tutorials on installing with Windows executables and Linux packages.

### Using SciPy

In this section, you'll learn how to use the SciPy optimization and root-finding library for linear programming.

To define and solve optimization problems with SciPy, you need to import scipy.optimize.linprog():

```
>>> from scipy.optimize import linprog
```

Now that you have linprog() imported, you can start optimizing.

### Example 1

Let's first solve the linear programming problem from above:

linprog() solves only minimization (not maximization) problems and doesn't allow inequality constraints with the greater than or equal to sign (≥). To work around these issues, you need to modify your problem before starting optimization:

- Instead of maximizing z = x + 2y, you can minimize its negative (−z = −x − 2y).
- Instead of using the greater than or equal to sign, you can multiply the yellow inequality by −1 and get the opposite inequality with the less than or equal to sign (≤).

After introducing these changes, you get a new system:

This system is equivalent to the original and will have the same solution. The only reason to apply these changes is to overcome SciPy's limitations related to problem formulation.

The next step is to define the input values:

```
>>> obj = [-1, -2]
>>> #      ─┬  ─┬
>>> #       │   └┤ Coefficient for y
>>> #       └────┤ Coefficient for x

>>> lhs_ineq = [[ 2,  1],  # Red constraint left side
...             [-4,  5],  # Blue constraint left side
...             [ 1, -2]]  # Yellow constraint left side

>>> rhs_ineq = [20,  # Red constraint right side
...             10,  # Blue constraint right side
...              2]  # Yellow constraint right side

>>> lhs_eq = [[-1, 5]]  # Green constraint left side
>>> rhs_eq = [15]       # Green constraint right side
```

You put the values from the system above into the appropriate lists, tuples, or NumPy arrays:

- **obj** holds the coefficients of the objective function.
- **lhs_ineq** holds the left-side coefficients of the inequality (red, blue, and yellow) constraints.
- **rhs_ineq** holds the right-side coefficients of the inequality (red, blue, and yellow) constraints.
- **lhs_eq** holds the left-side coefficients of the equality (green) constraint.
- **rhs_eq** holds the right-side coefficients of the equality (green) constraint.

**Note:** Be careful with the order of rows and columns!

The order of the rows for the left and right sides of the constraints must be the same. Each row represents one constraint.

The order of the coefficients from the objective function and the left sides of the constraints must match. Each column corresponds to a single decision variable.

The next step is to define the bounds for each variable in the same order as the coefficients. In this case, they're both between zero and positive infinity:

```
>>> bnd = [(0, float("inf")),  # Bounds of x
...        (0, float("inf"))]  # Bounds of y
```

This statement is redundant because linprog() takes these bounds (zero to positive infinity) by default.
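To see that the explicit bounds really are redundant, here's a quick sketch (using a recent SciPy and its default solver, so the method argument from this tutorial is omitted):

```python
from scipy.optimize import linprog

obj = [-1, -2]
lhs_ineq = [[2, 1], [-4, 5], [1, -2]]
rhs_ineq = [20, 10, 2]
lhs_eq = [[-1, 5]]
rhs_eq = [15]

# Solve once with the explicit bounds (zero to positive infinity)...
bnd = [(0, float("inf")), (0, float("inf"))]
with_bounds = linprog(c=obj, A_ub=lhs_ineq, b_ub=rhs_ineq,
                      A_eq=lhs_eq, b_eq=rhs_eq, bounds=bnd)

# ...and once with the default bounds, which are the same
default_bounds = linprog(c=obj, A_ub=lhs_ineq, b_ub=rhs_ineq,
                         A_eq=lhs_eq, b_eq=rhs_eq)

print(with_bounds.x, default_bounds.x)  # identical solutions
```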

**Note:** Instead of float("inf"), you can use math.inf, numpy.inf, or scipy.inf.

Finally, it's time to optimize and solve your problem of interest. You can do that with linprog():

```
>>> opt = linprog(c=obj, A_ub=lhs_ineq, b_ub=rhs_ineq,
...               A_eq=lhs_eq, b_eq=rhs_eq, bounds=bnd,
...               method="revised simplex")
>>> opt
     con: array([0.])
     fun: -16.818181818181817
 message: 'Optimization terminated successfully.'
     nit: 3
   slack: array([ 0.        , 18.18181818,  3.36363636])
  status: 0
 success: True
       x: array([7.72727273, 4.54545455])
```

The parameter c refers to the coefficients from the objective function. A_ub and b_ub are related to the coefficients from the left and right sides of the inequality constraints, respectively. Similarly, A_eq and b_eq refer to the equality constraints. You can use bounds to provide the lower and upper bounds on the decision variables.

You can use the parameter method to define the linear programming method that you want to use. There are three options:

- **method="interior-point"** selects the interior-point method. This option is set by default.
- **method="revised simplex"** selects the revised two-phase simplex method.
- **method="simplex"** selects the legacy two-phase simplex method.

linprog() returns a data structure with these attributes:

- **.con** is the equality constraints residuals.
- **.fun** is the objective function value at the optimum (if found).
- **.message** is the status of the solution.
- **.nit** is the number of iterations needed to finish the calculation.
- **.slack** is the values of the slack variables, or the differences between the values of the left and right sides of the constraints.
- **.status** is an integer between 0 and 4 that shows the status of the solution, such as 0 for when the optimal solution has been found.
- **.success** is a Boolean that shows whether the optimal solution has been found.
- **.x** is a NumPy array holding the optimal values of the decision variables.

You can access these values separately:

```
>>> opt.fun
-16.818181818181817

>>> opt.success
True

>>> opt.x
array([7.72727273, 4.54545455])
```

That's how you get the results of optimization. You can also show them graphically:

As discussed earlier, the optimal solutions to linear programming problems lie at the vertices of the feasible regions. In this case, the feasible region is just the portion of the green line between the blue and red lines. The optimal solution is the green square that represents the point of intersection between the green and red lines.

If you want to exclude the equality (green) constraint, just drop the parameters A_eq and b_eq from the linprog() call:

```
>>> opt = linprog(c=obj, A_ub=lhs_ineq, b_ub=rhs_ineq, bounds=bnd,
...               method="revised simplex")
>>> opt
     con: array([], dtype=float64)
     fun: -20.714285714285715
 message: 'Optimization terminated successfully.'
     nit: 2
   slack: array([0.        , 0.        , 9.85714286])
  status: 0
 success: True
       x: array([6.42857143, 7.14285714])
```

The solution is different from the previous case. You can see it on the chart:

In this example, the optimal solution is the purple vertex of the feasible (gray) region where the red and blue constraints intersect. Other vertices, like the yellow one, have lower values of the objective function.

### Example 2

You can use SciPy to solve the resource allocation problem stated in the earlier section:

As in the previous example, you need to extract the necessary vectors and matrices from the problem above, pass them as arguments to linprog(), and get the results:

```
>>> obj = [-20, -12, -40, -25]

>>> lhs_ineq = [[1, 1, 1, 1],  # Manpower
...             [3, 2, 1, 0],  # Material A
...             [0, 1, 2, 3]]  # Material B

>>> rhs_ineq = [ 50,  # Manpower
...             100,  # Material A
...              90]  # Material B

>>> opt = linprog(c=obj, A_ub=lhs_ineq, b_ub=rhs_ineq,
...               method="revised simplex")
>>> opt
     con: array([], dtype=float64)
     fun: -1900.0
 message: 'Optimization terminated successfully.'
     nit: 2
   slack: array([ 0., 40.,  0.])
  status: 0
 success: True
       x: array([ 5.,  0., 45.,  0.])
```

The result tells you that the maximal profit is 1900 and corresponds to x₁ = 5 and x₃ = 45. It's not profitable to produce the second and fourth products under the given conditions. You can draw several interesting conclusions here:

- The third product brings the largest profit per unit, so the factory will produce it the most.
- The first slack is 0, which means that the values of the left and right sides of the manpower (first) constraint are the same. The factory produces 50 units per day, and that's its full capacity.
- The second slack is 40 because the factory consumes 60 units of raw material A (15 units for the first product plus 45 for the third) out of a potential 100 units.
- The third slack is 0, which means that the factory consumes all 90 units of raw material B. This entire amount is consumed for the third product. That's why the factory can't produce the second or fourth product at all and can't produce more than 45 units of the third product. It lacks the raw material B.
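You can verify the reported slack values by hand, since the slack is simply the difference between the right and left sides of each inequality constraint at the optimum:

```python
import numpy as np

# Recompute the slacks reported above: slack = b_ub - A_ub @ x
lhs_ineq = np.array([[1, 1, 1, 1],   # Manpower
                     [3, 2, 1, 0],   # Material A
                     [0, 1, 2, 3]])  # Material B
rhs_ineq = np.array([50, 100, 90])
x_opt = np.array([5.0, 0.0, 45.0, 0.0])  # optimal production plan

slack = rhs_ineq - lhs_ineq @ x_opt
print(slack)  # [ 0. 40.  0.]
```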

opt.status is 0 and opt.success is True, indicating that the optimization problem was successfully solved and the optimal feasible solution was found.

SciPy's linear programming capabilities are useful mainly for smaller problems. For larger and more complex problems, you might find other libraries more suitable for the following reasons:

- SciPy can't run various external solvers.
- SciPy can't work with integer decision variables.
- SciPy doesn't provide classes or functions that facilitate model building. You have to define arrays and matrices, which might be a tedious and error-prone task for large problems.
- SciPy doesn't allow you to define maximization problems directly. You must convert them to minimization problems.
- SciPy doesn't allow you to define constraints using the greater than or equal to sign directly. You must use less than or equal to instead.

Fortunately, the Python ecosystem offers several alternative solutions for linear programming that are very useful for larger problems. One of them is PuLP, which you'll see in action in the next section.

### Using PuLP

PuLP has a more convenient linear programming API than SciPy. You don't have to mathematically modify your problem or use vectors and matrices. Everything is cleaner and less prone to errors.

As usual, you start by importing what you need:

`from pulp import LpMaximize, LpProblem, LpStatus, lpSum, LpVariable`

Now that you have PuLP imported, you can solve your problems.

### Example 1

You'll now solve this system with PuLP:

The first step is to initialize an instance of LpProblem to represent your model:

```
# Create the model
model = LpProblem(name="small-problem", sense=LpMaximize)
```

You can use the sense parameter to choose whether to perform minimization (LpMinimize or 1, which is the default) or maximization (LpMaximize or -1). This choice will affect the result of your problem.

Once you have the model, you can define the decision variables as instances of the LpVariable class:

```
# Initialize the decision variables
x = LpVariable(name="x", lowBound=0)
y = LpVariable(name="y", lowBound=0)
```

You need to provide a lower bound with lowBound=0 because the default value is negative infinity. The parameter upBound defines the upper bound, but you can omit it here because it defaults to positive infinity.

The optional parameter cat defines the category of a decision variable. If you're working with continuous variables, then you can use the default value "Continuous".

You can use the variables x and y to create other PuLP objects that represent linear expressions and constraints:

```
>>> expression = 2 * x + 4 * y
>>> type(expression)
<class 'pulp.pulp.LpAffineExpression'>

>>> constraint = 2 * x + 4 * y >= 8
>>> type(constraint)
<class 'pulp.pulp.LpConstraint'>
```

When you multiply a decision variable by a scalar or build a linear combination of multiple decision variables, you get an instance of pulp.LpAffineExpression that represents a linear expression.

**Note:** You can add or subtract variables or expressions, and you can multiply them by constants, because the PuLP classes implement some of the Python special methods that emulate numeric types, like __add__(), __sub__(), and __mul__(). These methods are used to customize the behavior of the operators +, -, and *.

Similarly, you can combine linear expressions, variables, and scalars with the operators ==, <=, or >= to get instances of LpConstraint that represent the linear constraints of your model.

**Note:** It's also possible to build constraints with the rich comparison methods .__eq__(), .__le__(), and .__ge__(), which define the behavior of the operators ==, <=, and >=.

With that in mind, the next step is to create the constraints and objective function and to assign them to your model. You don't need to create lists or matrices. Just write Python expressions and use the += operator to append them to the model:

```
# Add the constraints to the model
model += (2 * x + y <= 20, "red_constraint")
model += (4 * x - 5 * y >= -10, "blue_constraint")
model += (-x + 2 * y >= -2, "yellow_constraint")
model += (-x + 5 * y == 15, "green_constraint")
```

In the code above, you define tuples that hold the constraints and their names. LpProblem allows you to add constraints to a model by specifying them as tuples. The first element is an LpConstraint instance. The second element is a human-readable name for that constraint.

Setting the objective function is very similar:

```
# Add the objective function to the model
obj_func = x + 2 * y
model += obj_func
```

Alternatively, you can use a shorter notation:

```
# Add the objective function to the model
model += x + 2 * y
```

Now you have the objective function added and the model defined.

**Note:** You can append a constraint or objective to the model with the operator += because its class, LpProblem, implements the special method .__iadd__(), which is used to specify the behavior of +=.

For larger problems, it's often more convenient to use lpSum() with a list or other sequence than to repeat the + operator. For example, you could add the objective function to the model with this statement:

```
# Add the objective function to the model
model += lpSum([x, 2 * y])
```

It produces the same result as the previous statement.

You can now see the full definition of this model:

```
>>> model
small-problem:
MAXIMIZE
1*x + 2*y + 0
SUBJECT TO
red_constraint: 2 x + y <= 20

blue_constraint: 4 x - 5 y >= -10

yellow_constraint: - x + 2 y >= -2

green_constraint: - x + 5 y = 15

VARIABLES
x Continuous
y Continuous
```

The string representation of the model contains all relevant data: the variables, constraints, objective, and their names.

**Note:** The string representation is built with the special method .__repr__(). For more details about .__repr__(), check out Pythonic OOP String Conversion: __repr__ vs __str__.

Finally, you're ready to solve the problem. You can do that by calling .solve() on your model object. If you want to use the default solver (CBC), then you don't need to pass any arguments:

```
# Solve the problem
status = model.solve()
```

.solve() calls the underlying solver, modifies the model object, and returns the integer status of the solution, which will be 1 if the optimum is found. For the rest of the status codes, see LpStatus[].

You can get the optimization results as the attributes of model. The function value() and the corresponding method .value() return the actual values of the attributes:

```
>>> print(f"status: {model.status}, {LpStatus[model.status]}")
status: 1, Optimal

>>> print(f"objective: {model.objective.value()}")
objective: 16.8181817

>>> for var in model.variables():
...     print(f"{var.name}: {var.value()}")
...
x: 7.7272727
y: 4.5454545

>>> for name, constraint in model.constraints.items():
...     print(f"{name}: {constraint.value()}")
...
red_constraint: -9.99999993922529e-08
blue_constraint: 18.181818300000003
yellow_constraint: 3.3636362999999996
green_constraint: -2.0000000233721948e-07
```

model.objective holds the value of the objective function, model.constraints contains the values of the slack variables, and the objects x and y have the optimal values of the decision variables. model.variables() returns a list with the decision variables:

```
>>> model.variables()
[x, y]
>>> model.variables()[0] is x
True
>>> model.variables()[1] is y
True
```

As you can see, this list contains the exact objects that were created with the constructor of LpVariable.

The results are approximately the same as the ones you got with SciPy.

**Note:** Be careful with the method .solve(): it changes the state of the objects x and y!

You can see which solver was used by calling .solver:

```
>>> model.solver
<pulp.apis.coin_api.PULP_CBC_CMD object at 0x7f60aea19e50>
```

The output informs you that the solver is CBC. You didn't specify a solver, so PuLP called the default one.

If you want to run a different solver, then you can specify it as an argument of .solve(). For example, if you want to use GLPK and already have it installed, then you can use solver=GLPK(msg=False) in the last line. Keep in mind that you'll also need to import it:

`from pulp import GLPK`

Now that you have GLPK imported, you can use it inside .solve():

```
# Create the model
model = LpProblem(name="small-problem", sense=LpMaximize)

# Initialize the decision variables
x = LpVariable(name="x", lowBound=0)
y = LpVariable(name="y", lowBound=0)

# Add the constraints to the model
model += (2 * x + y <= 20, "red_constraint")
model += (4 * x - 5 * y >= -10, "blue_constraint")
model += (-x + 2 * y >= -2, "yellow_constraint")
model += (-x + 5 * y == 15, "green_constraint")

# Add the objective function to the model
model += lpSum([x, 2 * y])

# Solve the problem
status = model.solve(solver=GLPK(msg=False))
```

The msg parameter is used to display information from the solver. msg=False disables showing this information. If you want to include the information, then just omit msg or set msg=True.

Your model is defined and solved, so you can inspect the results the same way you did in the previous case:

```
>>> print(f"status: {model.status}, {LpStatus[model.status]}")
status: 1, Optimal

>>> print(f"objective: {model.objective.value()}")
objective: 16.81817

>>> for var in model.variables():
...     print(f"{var.name}: {var.value()}")
...
x: 7.72727
y: 4.54545

>>> for name, constraint in model.constraints.items():
...     print(f"{name}: {constraint.value()}")
...
red_constraint: -1.0000000000509601e-05
blue_constraint: 18.181830000000005
yellow_constraint: 3.3636299999999997
green_constraint: -2.000000000279556e-05
```

You got practically the same result with GLPK as you did with SciPy and CBC.

Let's peek and see which solver was used this time:

```
>>> model.solver
<pulp.apis.glpk_api.GLPK_CMD object at 0x7f60aeb04d50>
```

As you defined above with the statement model.solve(solver=GLPK(msg=False)), the solver is GLPK.

You can also use PuLP to solve mixed-integer linear programming problems. To define an integer or binary variable, just pass `cat="Integer"` or `cat="Binary"` to `LpVariable`. Everything else remains the same:

```python
# Create the model
model = LpProblem(name="small-problem", sense=LpMaximize)

# Initialize the decision variables: x is integer, y is continuous
x = LpVariable(name="x", lowBound=0, cat="Integer")
y = LpVariable(name="y", lowBound=0)

# Add the constraints to the model
model += (2 * x + y <= 20, "red_constraint")
model += (4 * x - 5 * y >= -10, "blue_constraint")
model += (-x + 2 * y >= -2, "yellow_constraint")
model += (-x + 5 * y == 15, "green_constraint")

# Add the objective function to the model
model += lpSum([x, 2 * y])

# Solve the problem
status = model.solve()
```

In this case, you have one integer variable and get a different result from before:

```python
>>> print(f"status: {model.status}, {LpStatus[model.status]}")
status: 1, Optimal

>>> print(f"objective: {model.objective.value()}")
objective: 15.8

>>> for var in model.variables():
...     print(f"{var.name}: {var.value()}")
...
x: 7.0
y: 4.4

>>> for name, constraint in model.constraints.items():
...     print(f"{name}: {constraint.value()}")
...
red_constraint: -1.5999999999999996
blue_constraint: 16.0
yellow_constraint: 3.8000000000000007
green_constraint: 0.0

>>> model.solver
<pulp.apis.coin_api.PULP_CBC_CMD at 0x7f0f005c6210>
```

Now `x` is an integer, as specified in the model. (Technically, it holds a floating-point value with nothing after the decimal point.) This fact changes the whole solution. Let's show this on the chart:

As you can see, the optimal solution is the rightmost green point on the gray background. It's the feasible solution with the largest values of both x and y, giving it the maximum objective function value.

GLPK is capable of solving such problems as well.
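You can convince yourself of this new optimum without any solver. Since the green equality still ties y to x, a plain brute-force loop over the integer candidates for x finds the same point. This is a standalone sketch, independent of the PuLP code:

```python
from fractions import Fraction as F

best = None
for x in range(0, 21):      # the red constraint 2x + y <= 20 caps x well below 21
    y = F(15 + x, 5)        # forced by the green equality -x + 5y = 15
    feasible = (2 * x + y <= 20 and 4 * x - 5 * y >= -10
                and -x + 2 * y >= -2 and y >= 0)
    if feasible:
        objective = x + 2 * y
        if best is None or objective > best[0]:
            best = (objective, x, y)

print(best)  # -> (Fraction(79, 5), 7, Fraction(22, 5)): objective 15.8 at x=7, y=4.4
```

The loop reproduces the CBC result exactly: `x = 7`, `y = 4.4`, objective `15.8`.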

### Example 2

Now you can use PuLP to solve the resource allocation problem from above.

The approach for defining and solving the problem is the same as in the previous example:

```python
# Define the model
model = LpProblem(name="resource-allocation", sense=LpMaximize)

# Define the decision variables
x = {i: LpVariable(name=f"x{i}", lowBound=0) for i in range(1, 5)}

# Add constraints
model += (lpSum(x.values()) <= 50, "manpower")
model += (3 * x[1] + 2 * x[2] + x[3] <= 100, "material_a")
model += (x[2] + 2 * x[3] + 3 * x[4] <= 90, "material_b")

# Set the objective
model += 20 * x[1] + 12 * x[2] + 40 * x[3] + 25 * x[4]

# Solve the optimization problem
status = model.solve()

# Get the results
print(f"status: {model.status}, {LpStatus[model.status]}")
print(f"objective: {model.objective.value()}")

for var in x.values():
    print(f"{var.name}: {var.value()}")

for name, constraint in model.constraints.items():
    print(f"{name}: {constraint.value()}")
```

In this case, you use the dictionary `x` to store all decision variables. This approach is convenient because dictionaries can store the names or indices of the decision variables as keys and the corresponding `LpVariable` objects as values. Lists or tuples of `LpVariable` instances can be useful as well.

The above code produces the following result:

```
status: 1, Optimal
objective: 1900.0
x1: 5.0
x2: 0.0
x3: 45.0
x4: 0.0
manpower: 0.0
material_a: -40.0
material_b: 0.0
```

As you can see, the solution is consistent with the one obtained using SciPy. The most profitable solution is to produce 5.0 units of the first product and 45.0 units of the third product per day.
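In PuLP, the printed constraint values correspond to the left-hand side minus the right-hand side of each constraint, so you can verify the output above by plugging the reported plan back in by hand. This is just a standalone double check:

```python
# Plug the reported optimum back into the model to confirm the
# objective value and the constraint values shown above.
plan = {1: 5.0, 2: 0.0, 3: 45.0, 4: 0.0}

profit = 20 * plan[1] + 12 * plan[2] + 40 * plan[3] + 25 * plan[4]
manpower = sum(plan.values()) - 50                      # LHS - RHS
material_a = 3 * plan[1] + 2 * plan[2] + plan[3] - 100  # LHS - RHS
material_b = plan[2] + 2 * plan[3] + 3 * plan[4] - 90   # LHS - RHS

print(profit, manpower, material_a, material_b)  # -> 1900.0 0.0 -40.0 0.0
```

Both the manpower and material B constraints are binding (value zero), while material A has 40 unused units.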

Let's make this problem more complicated and interesting. Suppose that, due to a machinery issue, the factory can't produce the first and third products in parallel. What's the most profitable solution in this case?

Now you have another logical constraint: if x₁ is positive, then x₃ must be zero, and vice versa. This is where binary decision variables are very useful. You'll use two binary decision variables, y₁ and y₃, that will denote whether the first or third product is produced at all:

```python
 1  model = LpProblem(name="resource-allocation", sense=LpMaximize)
 2
 3  # Define the decision variables
 4  x = {i: LpVariable(name=f"x{i}", lowBound=0) for i in range(1, 5)}
 5  y = {i: LpVariable(name=f"y{i}", cat="Binary") for i in (1, 3)}
 6
 7  # Add constraints
 8  model += (lpSum(x.values()) <= 50, "manpower")
 9  model += (3 * x[1] + 2 * x[2] + x[3] <= 100, "material_a")
10  model += (x[2] + 2 * x[3] + 3 * x[4] <= 90, "material_b")
11
12  M = 100
13  model += (x[1] <= y[1] * M, "x1_constraint")
14  model += (x[3] <= y[3] * M, "x3_constraint")
15  model += (y[1] + y[3] <= 1, "y_constraint")
16
17  # Set objective
18  model += 20 * x[1] + 12 * x[2] + 40 * x[3] + 25 * x[4]
19
20  # Solve the optimization problem
21  status = model.solve()
22
23  print(f"status: {model.status}, {LpStatus[model.status]}")
24  print(f"objective: {model.objective.value()}")
25
26  for var in model.variables():
27      print(f"{var.name}: {var.value()}")
28
29  for name, constraint in model.constraints.items():
30      print(f"{name}: {constraint.value()}")
```

Most of this code is identical to the previous example. Here's what's different:

- **Line 5** defines the binary decision variables `y[1]` and `y[3]` and keeps them in the dictionary `y`.
- **Line 12** defines an arbitrarily large number `M`. The value `100` is large enough in this case because you can't have more than `100` units per day.
- **Line 13** says that if `y[1]` is zero, then `x[1]` must be zero; otherwise, it can be any non-negative number.
- **Line 14** says that if `y[3]` is zero, then `x[3]` must be zero; otherwise, it can be any non-negative number.
- **Line 15** says that either `y[1]` or `y[3]` is zero (or both are), so either `x[1]` or `x[3]` must be zero as well.
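To see why these three constraints encode the machinery restriction, you can enumerate the allowed binary combinations by hand. This sketch is purely illustrative and independent of PuLP:

```python
M = 100  # the same arbitrarily large constant as in the model

# Enumerate the (y1, y3) combinations that satisfy y1 + y3 <= 1 and show
# the upper bounds they impose on x1 and x3 via x <= y * M.
rows = [(y1, y3, y1 * M, y3 * M)
        for y1 in (0, 1) for y3 in (0, 1)
        if y1 + y3 <= 1]

for y1, y3, x1_max, x3_max in rows:
    print(f"y1={y1}, y3={y3}: x1 <= {x1_max}, x3 <= {x3_max}")
```

In every allowed combination, at least one of `x1` and `x3` is forced to zero, which is exactly the either-or condition you wanted.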

Here's the solution:

```
status: 1, Optimal
objective: 1800.0
x1: 0.0
x2: 0.0
x3: 45.0
x4: 0.0
y1: 0.0
y3: 1.0
manpower: -5.0
material_a: -55.0
material_b: 0.0
x1_constraint: 0.0
x3_constraint: -55.0
y_constraint: 0.0
```

It turns out that the optimal approach is to exclude the first product and produce only the third one.
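You can also sanity-check that the solver picked the better branch. Forcing the opposite choice (y₁ = 1, y₃ = 0) removes the third product, and a brute-force scan over integer production plans, using only the standard library, shows how profitable that branch can be. (The best plan in that branch happens to land on whole numbers, so the integer scan finds it exactly; this is an illustrative check, not a general MILP method.)

```python
# With the first product allowed and the third excluded (x3 = 0), search
# integer plans against the three resource constraints.
best = max(
    (20 * x1 + 12 * x2 + 25 * x4, x1, x2, x4)
    for x1 in range(51)
    for x2 in range(51)
    for x4 in range(31)
    if x1 + x2 + x4 <= 50
    and 3 * x1 + 2 * x2 <= 100
    and x2 + 3 * x4 <= 90
)
print(best)  # -> (1150, 20, 0, 30)
```

The best that branch can do is a profit of 1150, well below the 1800.0 found above, so excluding the first product is indeed the right call.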

## Linear programming solver

Just as there are many resources to help you learn linear programming and mixed-integer linear programming, there are also many solvers with Python wrappers available. Here's a partial list:

- GLPK
- LP Solve
- CLP
- CBC
- CVXOPT
- SciPy
- SCIP with PySCIPOpt
- Gurobi Optimizer
- CPLEX
- XPRESS
- MOSEK

Some of these libraries, such as Gurobi, include their own Python wrappers. Others use external wrappers. For example, you saw that PuLP can access CBC and GLPK.

## Conclusion

You now know what linear programming is and how to solve linear programming problems with Python. You also learned that Python linear programming libraries are mostly wrappers around native solvers. When the solver finishes its job, the wrapper returns the solution status, the decision variable values, the slack variables, the objective function, and so on.
