Technical approach and mathematical formulation
The well placement optimization problem involves selecting, from a set of candidate locations, the locations at which to drill new wells so as to maximize expected Net Present Value (NPV) while minimizing risk.
We formulate the well placement problem as a mean-variance portfolio optimization:
\[ \max_{\mathbf{x}} \left\{ \mathbf{r}^T \mathbf{x} - \lambda \mathbf{x}^T \mathbf{Q} \mathbf{x} \right\} \]
subject to:
\[ \sum_{i=1}^{n} x_i = N \]
\[ x_i \in \{0,1\} \quad \forall i \in \{1,2,\ldots,n\} \]
where:
- \(\mathbf{x} \in \{0,1\}^n\) is the binary selection vector (\(x_i = 1\) if candidate location \(i\) is drilled),
- \(\mathbf{r}\) is the vector of expected NPVs at the \(n\) candidate locations,
- \(\mathbf{Q}\) is the covariance matrix of the NPV estimates,
- \(\lambda \ge 0\) is the risk-aversion parameter, and
- \(N\) is the number of wells to drill.
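As a quick numeric sanity check of the objective, the sketch below evaluates \(\mathbf{r}^T \mathbf{x} - \lambda \mathbf{x}^T \mathbf{Q} \mathbf{x}\) on a toy three-location instance (all numbers are illustrative, not from the paper):

```python
import numpy as np

# Toy instance with n = 3 candidate locations (illustrative numbers only)
r = np.array([4.0, 3.0, 5.0])            # expected NPV per location
Q = np.array([[1.0, 0.5, 0.2],
              [0.5, 1.0, 0.1],
              [0.2, 0.1, 1.0]])          # covariance of NPV estimates
lam = 0.5                                 # risk-aversion parameter

def objective(x, r, Q, lam):
    """Mean-variance objective r^T x - lambda * x^T Q x."""
    x = np.asarray(x, dtype=float)
    return r @ x - lam * x @ Q @ x

# Selecting locations 1 and 3 (x = [1, 0, 1]):
# mean = 4 + 5 = 9; variance = 1 + 1 + 2*0.2 = 2.4; objective = 9 - 0.5*2.4 = 7.8
print(objective([1, 0, 1], r, Q, lam))   # → 7.8
```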
We use Gaussian Process (GP) regression to model the spatial distribution of well productivity across the reservoir. This allows us to capture the correlation structure between different locations.
Our framework supports multiple variogram models to capture spatial correlation:
RBF Kernel:
\[ k(x_i, x_j) = \sigma^2 \exp\left(-\frac{||x_i - x_j||^2}{2l^2}\right) \]
Exponential Kernel:
\[ k(x_i, x_j) = \sigma^2 \exp\left(-\frac{||x_i - x_j||}{l}\right) \]
The GP model provides the mean vector \(\mathbf{r}\) and covariance matrix \(\mathbf{Q}\) needed for the optimization problem.
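The two kernels above can be assembled into covariance matrices directly with NumPy; the following sketch (coordinates and hyperparameter values are illustrative assumptions, not fitted values) shows how \(\mathbf{Q}\) would be built over a set of candidate locations:

```python
import numpy as np

def rbf_kernel(X, sigma2=1.0, length=1.0):
    """RBF covariance: k(xi, xj) = sigma^2 * exp(-||xi - xj||^2 / (2 l^2))."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return sigma2 * np.exp(-d2 / (2.0 * length ** 2))

def exponential_kernel(X, sigma2=1.0, length=1.0):
    """Exponential covariance: k(xi, xj) = sigma^2 * exp(-||xi - xj|| / l)."""
    d = np.sqrt(np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
    return sigma2 * np.exp(-d / length)

# Candidate well locations on a small grid (coordinates are illustrative)
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
Q = rbf_kernel(X, sigma2=2.0, length=1.5)
# The diagonal equals sigma^2; off-diagonal entries decay with distance
print(np.round(Q, 3))
```

In a fitted GP, \(\sigma^2\) and \(l\) would come from hyperparameter estimation rather than being set by hand.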
The mean-variance portfolio optimization is formulated as a Quadratic Knapsack Problem (QKP), which is an extension of the classic knapsack problem with quadratic terms in the objective function.
import gurobipy as gp
from gurobipy import GRB

def solve_qkp(mean_vector, cov_matrix, n_select, lambda_value):
    """Solve the well-placement QKP: max r^T x - lambda * x^T Q x, sum(x) = N."""
    n = len(mean_vector)

    # Create model
    model = gp.Model("well_placement_qkp")

    # Add binary decision variables
    x = model.addVars(n, vtype=GRB.BINARY, name="x")

    # Set objective: maximize mean - lambda * variance
    objective = gp.quicksum(mean_vector[i] * x[i] for i in range(n))

    # Subtract the quadratic risk term
    for i in range(n):
        for j in range(n):
            objective -= lambda_value * cov_matrix[i, j] * x[i] * x[j]
    model.setObjective(objective, GRB.MAXIMIZE)

    # Constrain the number of wells to select
    model.addConstr(gp.quicksum(x[i] for i in range(n)) == n_select)

    # Solve
    model.optimize()

    # Return the indices of the selected locations
    if model.status == GRB.OPTIMAL:
        return [i for i in range(n) if x[i].X > 0.5]
    return None
For large-scale problems, we employ Randomized Singular Value Decomposition (RSVD) to create a low-rank approximation of the covariance matrix, significantly reducing computational requirements.
The full covariance matrix \(\mathbf{Q}\) is approximated as:
\[ \mathbf{Q} \approx \mathbf{U}_r \mathbf{\Sigma}_r \mathbf{U}_r^T \]
where \(\mathbf{U}_r\) contains the first \(r\) left singular vectors and \(\mathbf{\Sigma}_r\) is a diagonal matrix with the largest \(r\) singular values.
This allows us to reformulate the quadratic term as:
\[ \mathbf{x}^T \mathbf{Q} \mathbf{x} \approx \mathbf{x}^T \mathbf{U}_r \mathbf{\Sigma}_r \mathbf{U}_r^T \mathbf{x} = \mathbf{y}^T \mathbf{\Sigma}_r \mathbf{y} \]
where \(\mathbf{y} = \mathbf{U}_r^T \mathbf{x}\).
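A minimal randomized range-finder sketch of this factorization, written in plain NumPy for a symmetric positive semi-definite \(\mathbf{Q}\) (function name, oversampling value, and test matrix are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def rsvd_lowrank(Q, rank, n_oversample=10, seed=0):
    """Randomized low-rank factorization Q ≈ U_r Σ_r U_r^T for symmetric PSD Q."""
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    # Range finder: sample the range of Q with a Gaussian test matrix
    Omega = rng.standard_normal((n, rank + n_oversample))
    Y = Q @ Omega
    B_q, _ = np.linalg.qr(Y)            # orthonormal basis for range(Y)
    # Project Q into the subspace and solve the small symmetric eigenproblem
    B = B_q.T @ Q @ B_q
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:rank]    # keep the r largest eigenvalues
    U_r = B_q @ V[:, idx]
    Sigma_r = np.diag(w[idx])
    return U_r, Sigma_r

# Example: a PSD matrix of exact rank 20, so a rank-20 RSVD is near-exact
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
Q = A @ A.T
U_r, Sigma_r = rsvd_lowrank(Q, rank=20)
err = np.linalg.norm(Q - U_r @ Sigma_r @ U_r.T) / np.linalg.norm(Q)
print(err)  # small relative error
```

With the factors in hand, the substitution \(\mathbf{y} = \mathbf{U}_r^T \mathbf{x}\) reduces the quadratic term from \(n^2\) products to \(r\) squared terms.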
[Figure: SVD approximation error vs. rank]
The complete optimization workflow involves the following steps:
1. Fit a GP model with a chosen kernel to the available productivity data, yielding the mean vector \(\mathbf{r}\) and covariance matrix \(\mathbf{Q}\) over all candidate locations.
2. For large-scale problems, compress \(\mathbf{Q}\) to a rank-\(r\) approximation with RSVD.
3. Solve the resulting QKP to select the \(N\) well locations.
4. Evaluate the selected portfolio under different values of \(\lambda\) to explore the risk-return trade-off.
Key parameters to tune in the optimization process:
| Parameter | Description | Impact |
|---|---|---|
| λ (lambda) | Risk-aversion parameter | Higher values prioritize risk reduction over expected returns |
| r (RSVD rank) | Target rank for the low-rank approximation | Lower ranks improve computational efficiency at the cost of accuracy |
| Variogram parameters | Correlation length, sill, etc. | Affect the spatial correlation structure and uncertainty quantification |
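The effect of λ can be illustrated without a solver: on small instances the QKP can be enumerated exhaustively, which also serves as a sanity check for any solver output. The brute-force helper and the numbers below are illustrative assumptions, not the Gurobi path used in the framework:

```python
import numpy as np
from itertools import combinations

def brute_force_qkp(r, Q, n_select, lam):
    """Exhaustively find the n_select-subset maximizing r^T x - lam * x^T Q x."""
    n = len(r)
    best, best_val = None, -np.inf
    for subset in combinations(range(n), n_select):
        idx = list(subset)
        # For binary x, x^T Q x is the sum of the selected submatrix of Q
        val = r[idx].sum() - lam * Q[np.ix_(idx, idx)].sum()
        if val > best_val:
            best, best_val = idx, val
    return best, best_val

r = np.array([5.0, 5.0, 4.0])
Q = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
# Low lambda: the two highest-mean wells win despite their strong correlation
print(brute_force_qkp(r, Q, 2, 0.1)[0])  # → [0, 1]
# High lambda: a diversified pair wins even though well 3 has a lower mean
print(brute_force_qkp(r, Q, 2, 2.0)[0])  # → [0, 2]
```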