Understanding Different Methods
There are several numerical methods available for solving matrix equations, each with its own advantages and disadvantages. For instance, Gaussian elimination is a common method but is known to be numerically unstable without pivoting [2:1]. QR decomposition is another method that can be used for solving systems of linear equations and finding the inverse of matrices; it is often considered more stable than Gaussian elimination [2:2]. Lagrange multipliers can also be applied in optimization problems involving matrix equations, particularly when dealing with quadratic forms [1:3].
Computational Efficiency and Stability
When comparing numerical methods, it is important to consider both computational efficiency and numerical stability. The flop count, which measures the number of floating-point operations required, can provide insight into the computational cost of a method [2:1]. Additionally, numerical stability is crucial, as some methods may produce inaccurate results due to rounding errors or ill-conditioned matrices. For example, direct search methods like Nelder-Mead can be advantageous when derivatives are not available, although they might be less efficient compared to derivative-based methods like BFGS [4:3].
Practical Considerations
In practice, the choice of method often depends on the specific problem being solved. For small matrices, methods like calculating the inverse using determinants and cofactors may suffice, but they become inefficient for larger matrices [2:3]. In many real-world applications, it is advised to avoid explicitly computing the inverse of a matrix if possible, as this can lead to unnecessary computational overhead and potential inaccuracies [2:1]. Instead, solving the system directly using techniques like LU decomposition or iterative methods can be more efficient.
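As a minimal illustration (my own sketch, not from the cited sources) of solving a system directly rather than forming the inverse, using SciPy's LU routines:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

lu, piv = lu_factor(A)        # one O(n^3) factorization of A
x = lu_solve((lu, piv), b)    # each solve is a cheap O(n^2) substitution
print(x)                      # solution of A x = b, no inverse ever formed

The factorization can be reused for many right-hand sides, which is exactly the situation where explicitly computing the inverse wastes work.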
Tools and Resources
For those new to numerical methods, leveraging computational tools is highly recommended. Software packages such as SciPy offer robust implementations of various numerical algorithms, making it easier to perform complex calculations without delving into the intricacies of each method [4]. Additionally, online calculators and smartphone apps can assist with basic matrix operations, freeing up time to focus on understanding the underlying concepts [5:7].
Maybe a bit embarrassing to ask, but my exposure to numerical methods is limited so far. I've been trying to develop my own finite element solver to learn more about how it all works, and I've been reading what other people have done, but one method captured my attention and I'm stumped on what it is. I've attached the photos below.
I've searched everywhere hoping to find a paper or something online that describes this method, but no luck. The Lagrange multipliers I'm finding online aren't related to what's covered here, since everything I'm finding is related to optimization. So what exactly is this method called, and is it worth exploring?
Edit: thank you for the very detailed responses! They all pointed me in the right direction.
I'm not sure on the specifics, as this particular topic is outside of my knowledge, but in general this appears to be Lagrange's method of undetermined coefficients; it's usually used in optimisation.
This is Lagrange multipliers used on a positive semidefinite quadratic form with matrix notation. That's what you should Google. If you search for "capon beamformer Lagrange multipliers" or "MVDR Lagrange multipliers" I am confident you will find a fairly detailed, step by step solution somewhere on the web. I've seen it, but I don't have a reference handy. It's out there. Bear in mind that MVDR is just one example of such a problem. It's not the same as what you're looking at, but it's close enough not to matter a whole lot.
They are just using matrices for all their notation. If they weren't doing that, it'd be ordinary calc 3.
Eigenvectors/eigenvalues
Pretty sure it is the regular Lagrange multipliers; lambda is just a vector, to make Cu into a scalar. If you differentiate with respect to u's components and lambda's components you'll get the same system.
Eq. 9.2 appears frequently in the numerical optimization and nonlinear programming literature. The specific equation is formulated/solved as a quadratic program, and constructing eq. 9.3 is one approach. I believe modern linear solvers can operate directly on the symmetric indefinite matrix to arrive at solutions for the variables (displacements in this case) and the Lagrange multipliers.
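(For concreteness, here is a minimal sketch of the kind of bordered system this reply describes: minimize (1/2) u^T K u - u^T f subject to C u = d. The matrices and numbers below are made up for illustration, not taken from the attached photos. Setting the gradient of the Lagrangian to zero gives a symmetric indefinite "KKT" matrix that a linear solver can handle directly:)

import numpy as np

K = np.array([[4.0, 1.0],
              [1.0, 3.0]])          # stiffness-like SPD matrix
f = np.array([1.0, 2.0])            # load vector
C = np.array([[1.0, 1.0]])          # one constraint: u1 + u2 = d
d = np.array([1.0])

n, m = K.shape[0], C.shape[0]
KKT = np.block([[K, C.T],
                [C, np.zeros((m, m))]])   # symmetric indefinite bordered matrix
rhs = np.concatenate([f, d])
sol = np.linalg.solve(KKT, rhs)
u, lam = sol[:n], sol[n:]           # displacements and Lagrange multipliers
print(u, lam)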
You say you’re trying to develop your own solver. Is this just a learning exercise? I would review the literature on symmetric indefinite matrix equation solvers. If you have access to back issues of SIAM journals, start there. Otherwise, if you’re just trying to get the job done, your time might be better spent looking for high quality computer solutions that are available.
Edited to include: look at routines MA27 and MA57 from the Harwell Subroutine Library (I. S. Duff).
Thanks for the input.
It's something as a side project for grad school, hopefully to build my skills for interviewing. End goal is to develop it into something pretty elaborate but I'm at the starting stages.
I'll take a look at those articles.
Hello everyone! I am researching the use of different numerical methods for calculating A^(-1), with A a square matrix of a given size (e.g. 3x3). I would like to share my methods with you and seek your opinions.
Method 1: Use of the formula A^(-1) = Com(A)^T / det(A) (the transpose of the cofactor matrix divided by the determinant).
Method 2: Solving, by the Gauss algorithm, the system (S): A·X = (a_1, ..., a_10)^T, with unknown X ∈ M_{10,1}(R), where (a_1, ..., a_10) are arbitrary.
Method 3: Solving system (S) by triangularizing A.
Method 4: Solving system (S) using the QR form from the course on Euclidean spaces (we saw and programmed it in tutorials/labs).
We don't use ready-made functions in Python: no system-solving functions, no reduction functions, no determinant functions.
How to compare the different methods: we count the number of basic operations (+, -, x, /) each method requires. As we don't know how many calculations happen inside det(), eigenvects(), etc., we don't allow using them and only use the most basic operations.
I need to think about it.
My feeling is that QR form is the best but I'm not entirely sure - need to check.
Also, should this be valid for any m x n matrix, or only square matrices, or only 3 x 3 matrices?
For instance, if it is a 3 x 3 matrix then probably even method one is great.
Thank you for your interesting idea! Maybe we can fix one size (3x3 is a little bit small; maybe we can use 10x10).
I don't know what sort of guidance you're looking for. If you're just comparing flop counts, the analysis should be pretty straightforward.
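(To make "straightforward" concrete, here is a rough sketch, my own rather than anything from the thread, of counting the basic operations in Gaussian elimination without pivoting; the count grows like (2/3)n^3:)

import numpy as np

def gauss_solve_with_flops(A, b):
    A, b = A.astype(float).copy(), b.astype(float).copy()
    n = len(b)
    flops = 0
    # forward elimination
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            flops += 1
            A[i, k+1:] -= m * A[k, k+1:]
            flops += 2 * (n - k - 1)
            b[i] -= m * b[k]
            flops += 2
    # back substitution
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        flops += 2 * (n - i - 1) + 2
    return x, flops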
Another thing to consider is numerical stability. For instance, Gaussian elimination (without pivoting) is known to be numerically unstable.
Having said all this, the best advice is probably "Don't compute the inverse!" It may surprise you to know that the cardinal rule of numerical linear algebra is "Never invert a matrix."
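(A quick sketch of the cardinal rule in action, with arbitrary data:)

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)

x_solve = np.linalg.solve(A, b)    # LU factorization + substitution
x_inv = np.linalg.inv(A) @ b       # explicit inverse: more work, typically less accurate

print(np.linalg.norm(A @ x_solve - b))   # residuals: the direct solve is usually smaller
print(np.linalg.norm(A @ x_inv - b))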
Hi, I'm trying to solve the KdV equation with the Crank–Nicolson scheme and I'm trying to follow the method in this document (pg 4). I am getting confused about how to iterate my loops because of all of the different indices, and how to keep track of values, etc. If anyone could give any advice, that'd be wonderful. Thank you! :)
I assume you saw the code at the bottom? Honestly, I'd love to be able to help further than that, but it's waaay above my head. Best of luck.
Haha yes! That's completely fine, thank you for looking!!
Hello guys,
As a first disclaimer, and maybe as a first question, I know very little about optimization, so probably the doubts in this post will come from my lack of knowledge. Is there any standard reference to introduce me to the different optimization methods?
The main point of this post is that I'm performing a minimization of an objective function using the scipy.optimize.minimize package. If I use BFGS it says it performs 0 iterations; if I use Nelder-Mead, it iterates but the final value is the initial condition. I can't figure out if it is a code problem or a concepts problem. Here is my code if it helps:
import numpy as np
import scipy.sparse as sp
import scipy.optimize as so
from import_h5py import K_IIDF, K_IBD, K_IBD_T, K_BBD
from grids_temperaturas import C_RD_filtrada
from leer_conductors import conductividades
def construct_KR(conduct_dict):
    """
    Converts a conductivity dictionary to a sparse matrix.

    Args:
        conduct_dict: Dictionary whose keys are (i, j) tuples and whose
            values are conductivities.

    Returns:
        sp.csc_matrix: Sparse matrix in CSC format.
    """
    if not conduct_dict:
        return sp.csc_matrix((0, 0))
    # Find the maximum dimensions
    max_i = max(k[0] for k in conduct_dict.keys())
    max_j = max(k[1] for k in conduct_dict.keys())
    n = max(max_i, max_j)  # We assume a square matrix
    # Prepare data for a COO matrix
    rows, cols, data = [], [], []
    for (i, j), val in conduct_dict.items():
        rows.append(i - 1)  # Convert to 0-based indexing
        cols.append(j - 1)
        data.append(val)
    # Build a symmetric matrix (by adding the transpose)
    K = sp.coo_matrix((data + data, (rows + cols, cols + rows)), shape=(n, n))
    row_sums = np.array(K.sum(axis=1)).flatten()
    Kii = sp.diags(row_sums, format='csc')
    K_R = Kii - K
    boundary_nodes = []
    with open('bloque_nastran3_acase.BOUNDARY_CONDS.data', 'r') as f:
        for line in f:
            # Look for lines that define boundary temperatures
            if line.strip().startswith('T') and '=' in line:
                # Extract the node number (e.g. 'T37' -> 37)
                node_str = line.split('=')[0].strip()[1:]  # Drop the 'T' and spaces
                try:
                    node_num = int(node_str)
                    boundary_nodes.append(node_num - 1)  # Convert to 0-based
                except ValueError:
                    continue
"""
Reordena la matriz y vector según nodos libres y con condiciones de contorno.
Args:
sparse_matrix: Matriz de conductividad (Kii - K) en formato sparse
temperature_vector: Vector de temperaturas
boundary_file: Ruta al archivo .BOUNDARY_CONDS
Returns:
tuple: (matriz reordenada, vector reordenado, número de nodos libres)
"""
# Leer nodos con condiciones de contorno del archivo
constrained_nodes = boundary_nodes
size = K_R.shape[0]
# Verificar que los nodos de contorno son válidos
for node in constrained_nodes:
if node < 0 or node >= size:
raise ValueError(f"Nodo de contorno {node+1} está fuera de rango")
# Crear máscaras booleanas
bound_mask = np.zeros(size, dtype=bool)
bound_mask[constrained_nodes] = True
free_mask = ~bound_mask
# Índices de nodos libres y con condición de contorno
free_nodes = np.where(free_mask)[0]
constrained_nodes = np.where(bound_mask)[0]
# Nuevo orden: primero libres, luego con condición de contorno
new_order = np.concatenate((free_nodes, constrained_nodes))
num_free = len(free_nodes)
num_constrained = len(constrained_nodes)
# Reordenar matriz y vector
K_ordered = sp.csc_matrix(K_R[new_order][:, new_order])
K_IIRF = K_ordered[:num_free, :num_free]
K_IBR = K_ordered[:num_free,:num_constrained]
K_IBR_T = K_IBR.transpose()
K_BBR = K_ordered[:num_constrained,:num_constrained]
return K_IIRF,K_IBR,K_IBR_T,K_BBR
#K_IIRF_test, _, _, _ = construct_KR(conductividades)
#resta = K_IIRF - K_IIRF_test
#print("Norm of difference:", np.max(resta))
## Precomputations
K_IIDF_K_IBD = sp.linalg.spsolve(K_IIDF, K_IBD)
# Reuse the solve above instead of solving the same system twice
K_IIDF_KIBD_C_RD = C_RD_filtrada @ K_IIDF_K_IBD
def calcular_epsilon_CMF(cond_vector):
    """
    Computes epsilon_CMF according to the given formula.

    Args:
        cond_vector: Vector of conductivities used to rebuild the K^R matrices.

    Returns:
        epsilon_CMF: Resulting scalar value.
    """
    nuevas_conductividades = cond_vector.tolist()
    nuevo_GL = dict(zip(vector_coordenadas, nuevas_conductividades))
    K_IIRF, K_IBR, K_IBR_T, K_BBR = construct_KR(nuevo_GL)
    #epsilon_MQ = sp.linalg.spsolve(K_BBD, sp.csc_matrix(-K_IBD_T @ sp.linalg.spsolve(K_IIDF, K_IBD) + K_BBD) - (-K_IBR_T @ sp.linalg.spsolve(K_IIRF, K_IBR) + K_BBR))
    epsilon_MQ = sp.linalg.spsolve(K_BBD, ((-K_IBD_T @ K_IIDF_K_IBD + K_BBD) - (-K_IBR_T @ sp.linalg.spsolve(K_IIRF, K_IBR) + K_BBR)))
    epsilon_MT = sp.linalg.spsolve(K_IIRF, K_IBR) - K_IIDF_KIBD_C_RD
    # Sum of squares (using .power(2) and .sum() for sparse matrices)
    sum_MQ = epsilon_MQ.power(2).sum()
    sum_MT = epsilon_MT.power(2).sum()
    # Square root of the total
    epsilon_CMF = np.sqrt(sum_MQ + sum_MT)
    return epsilon_CMF
def debug_callback(xk):
    print("Iteration, error:", calcular_epsilon_CMF(xk))

cond_opt = so.minimize(
    calcular_epsilon_CMF,
    vector_conductividades,
    method='Nelder-Mead',
    options={'maxiter': 100, 'xatol': 1e-8},
    callback=debug_callback
)
print("epsilon_CMF:", cond_opt)
The idea is that, with each iteration, cond_vector changes and new matrices are generated with construct_KR, so epsilon_CMF is recalculated.
Do you have a mathematical formulation of the optimization problem you're trying to solve? It would make it a lot easier for us to understand what is going on.
BFGS is a quasi-Newton method which makes use of derivatives.
Nelder-Mead however is a direct search method which only makes use of function values. The advantage is that you don't need derivatives, or even an explicit expression for the function you're trying to minimize. All you need is to be able to compute function values in the points the algorithm wants to examine. The disadvantage is that there is no guarantee that it will converge to the optimum, or even to a stationary point!
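(A minimal sketch of the failure mode described above, with a stand-in objective rather than the poster's calcular_epsilon_CMF: if the function is nearly flat at x0, the finite-difference gradient falls below BFGS's default gtol=1e-5 and it declares convergence after 0 iterations. Rescaling the objective restores a visible slope:)

import numpy as np
from scipy.optimize import minimize

def f(x):
    # stand-in objective: right shape, but tiny magnitude
    return 1e-14 * np.sum((x - 3.0)**2)

x0 = np.zeros(4)
res = minimize(f, x0, method='BFGS')
print(res.nit, res.x)     # 0 iterations; x never leaves x0

res2 = minimize(lambda x: 1e14 * f(x), x0, method='BFGS')
print(res2.nit, res2.x)   # now iterates and converges to ~3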
Alternatively, could the initial value itself be a local minimum, and that is why both searches return the same?
It looks like the value of the error is very insensitive to the values of the matrices K^R. I tried to modify one random value strongly and the value of epsilon_CMF changed only around the 14th decimal place, so maybe it is connected to that.
This is all the mathematical formulation I have been provided with. The K matrices represent the conductivity matrices of two models: detailed (superscript D) and reduced (superscript R). epsilon_MQ and epsilon_MT are both error matrices, and I calculate the total error epsilon_CMF. I need to optimize the values of the elements inside the K^R matrices to minimize the total error. So, in the function in my code, I use the vector cond_vector to build the different K^R matrices and try to reach the values of cond_vector that minimize the total error epsilon_CMF.
Sorry for the missing mathematics; without them the code looks kinda messy.
Most likely your issues are due to scaling
We are currently learning matrix equations in precalc and I have come to realize that it takes a good 5 minutes to solve one equation. There has got to be a quicker and more efficient way of solving these problems.
Probably not. Doing Gauss–Jordan elimination by hand is very time consuming. Take enough time to make sure you don't make any mistakes. Making an arithmetic error is absolutely the most time-consuming thing you can possibly do.
I remember once we got to the end of our section covering this, my linear algebra professor told us, "and now instead of having to spend 10 minutes on every problem, you can just plug all the problems into some matrix calculator online, because honestly, nobody wants to spend a long time row reducing just to get to the next step of the problem."
Jokes aside, there's not really a quicker way to solve them. Most of the math is basic, it's just tedious to do a lot. If you have a 5x5 matrix, it's going to take a bit to fill in 25 entries, regardless of what method you use and especially if you're trying to show all your work. You get more efficient with it as you get used to doing it, but there's not really a faster method in general.
In general, there isn't, apart from just getting much quicker at the basics of matrix multiplication (which is just arithmetic).
The good news is that it's quite easy to get computers to do matrix arithmetic. Usually the courses (like Linear Algebra) that make you do lots of matrix calculations are doing it to make sure you have a good grounding in the fundamentals of how matrices interact with each other, so you can apply that understanding to more abstract questions about their behaviour, not so you can do lots of matrix calculations by hand in future courses.
Calculator
rref( [matrix] )
I don’t have a calculator with this function but I’ll keep that in mind for when I do.
Your smartphone or computer can do it.
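For instance, in Python with sympy (a one-off sketch; the matrix here is arbitrary):

from sympy import Matrix

rref_form, pivots = Matrix([[1, 2, 3],
                            [4, 5, 6],
                            [7, 8, 10]]).rref()
print(rref_form)   # reduced row echelon form, exact arithmetic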
As you learn more about matrices, more short-cuts and special cases and general techniques and abstractions are added to your toolkit. Now is the slowest you'll ever be.
I’d like to know the historical process behind two mathematical/numerical methods:
My question isn’t just who first wrote them down, but how they were invented:
I’d love to understand what kind of analogies, problems, or constraints guided these mathematicians — essentially, how they thought their way into discovering these methods, not just the final result. I’d appreciate a timeline, the key figures/papers, and especially what the inventors were trying to achieve at the time.
Autonne's work was theoretical in nature and was inspired by the study of linear transformations. He sought to generalize the polar representation of complex numbers. The polar decomposition later found applications in fluid mechanics, where A = UH represents a state matrix A as the product of a rotation U and a pure deformation H (a Hermitian matrix).
The Newton–Schulz method is a matrix inversion algorithm that applies the principles of the scalar Newton's method to compute matrix inverses without performing a direct inversion. The main drivers for its development were the limitations of early computers. The algorithm is advantageous for modern high-performance computing because its operations can be parallelized.
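(A minimal sketch of the iteration itself, X_{k+1} = X_k (2I - A X_k), with a standard safe starting guess; the example matrix is arbitrary:)

import numpy as np

def newton_schulz_inverse(A, iters=30):
    n = A.shape[0]
    # X0 = A^T / (||A||_1 * ||A||_inf) guarantees convergence for nonsingular A
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        # only matrix products: no direct inversion, easy to parallelize
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
print(newton_schulz_inverse(A) @ A)   # ~ identity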
Hey! How should I solve h(t) from this system of equations (if it's even possible, sorry if it's a dumb question) https://i.gyazo.com/f4df0be6db25e59cb8b10c3610caf3ba.png numerically (Euler's method or something like that)?
Initial conditions being something like this: h(0) = 0, h'(0) = 0, h''(0) = 0.
There are built-in solvers like ODE45; they take a little getting used to (you'll need to create a function to evaluate the derivative and give the routine a function handle). Take a look at this example: https://www.mathworks.com/help/matlab/math/solve-nonstiff-odes.html
Maybe just go on Wikipedia, search for the Runge–Kutta method, and try to write the code yourself. If it's the first time you use it, it's a good exercise :) let me know if you need help.
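(Since the actual equations are only in the linked image, here is a generic sketch with a made-up right-hand side: rewrite h''' = F(t, h, h', h'') as a first-order system and hand it to a Runge–Kutta solver, e.g. SciPy's solve_ivp:)

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    h, dh, ddh = y
    dddh = np.sin(t) - 0.5 * dh - h   # placeholder for the real equation
    return [dh, ddh, dddh]

# initial conditions h(0) = h'(0) = h''(0) = 0, as in the question
sol = solve_ivp(rhs, (0.0, 10.0), y0=[0.0, 0.0, 0.0], method='RK45',
                dense_output=True)
t = np.linspace(0, 10, 200)
h = sol.sol(t)[0]   # h(t) on a grid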
Where is the stiffness?
If it's in a linear term, like
dx/dt = (nonlinear, non-stiff stuff) - (very large number) * x,
then there are various good methods. This is often the case in real-world equations.
If the stiffness is nonlinear there's usually not much you can do about it.
I am not so sure, honestly, where it's coming from. How would I find that out? I have a very nonlinear system of equations, though. It's for a chemical/process system.
In chemical process equations, the stiffness usually originates from the exponential Arrhenius source term.
Ok can you show me the equations and where the large or small parameter is?
There are still good things for nonlinear stiffness, they're just more complicated.
Hi, I'm relatively new to both Python and math (I majored in history something like a year ago), so I get it if the problem I'm about to ask help for sounds very trivial.
My code has started running super slow out of nowhere. I was literally running it in 30 seconds, despite the multiple nested loops that calculated 56 million combinations; it was relatively OK even with a very computationally heavy grid search for my parameters. I swear, I went to get coffee, did not even turn off the PC, and from one iteration to the other it's now 30 minutes of waiting time. Mind you, I have not changed a single thing.
(these are three separate py files, just to illustrate the process I'm going through)
FIRST FILE:
std = np.linalg.cholesky(matrix)
part = df['.ARTKONE returns'] + 1
ψ = np.sqrt(np.exp(np.var(part) - 1))
emp_kurtosis = 16*ψ**2 + 15*ψ**4 + 6*ψ**6 + ψ**8
emp_skew = 3*ψ + ψ**3
intensity = []
jump_std = []
brownian_std = []
for λ in np.linspace(0, 1, 100):
    for v in np.linspace(0, 1, 100):
        for β in np.linspace(0, 1, 100):
            ξ = np.sqrt(np.exp(λ*v**2 + λ*β**2) - 1)
            jump_kurtosis = 16*ξ**2 + 15*ξ**4 + 6*ξ**6 + ξ**8
            jump_skew = 3*ξ + ξ**3
            # note: the third positional argument of np.isclose is rtol, not atol
            if np.isclose(jump_kurtosis, emp_kurtosis, 0.00001) and np.isclose(emp_skew, jump_skew, 0.00001):
                print(f'match found for: - intensity: {λ} -- jump std: {β} -- brownian std: {v}')
SECOND FILE:
df_3 = pd.read_excel('paraameters_values.xlsx')
df_3.drop(axis=1, columns='Unnamed: 0', inplace=True)
part = df['.ARTKONE returns'] + 1
mean = np.mean(part)
ψ = np.sqrt(np.exp(np.var(part) - 1))
var_psi = mean * ψ
for i in range(14):
    λ = df_3.iloc[i, 0]
    β = df_3.iloc[i, 1]
    v = df_3.iloc[i, 2]
    for α in np.linspace(-1, 1, 2000):
        for δ in np.linspace(-1, 1, 2000):
            exp_jd_r = np.exp(δ + λ - λ*(np.exp(α - 0.5*β**2)) + λ*α + λ*(0.5*β**2))
            var_jd_p = (np.sqrt(np.exp(λ*v**2 + λ*β**2) - 1)) * exp_jd_r**2
            if np.isclose(var_jd_p, var_psi, 0.0001) and np.isclose(exp_jd_r, mean, 0.0001):
                print(f'match found for: - intensity: {λ} -- jump std: {β} -- brownian std: {v} -- delta: {δ} -- alpha: {α}')
FUNCTIONS (where φ is usually risk tolerance = 1, just there in case I wanted a risk-neutral measure):
def jump_diffusion_stock_path(S0, T, μ, σ, α, β, λ, φ):
    # Number of jumps over [0, T]
    n_j = np.random.poisson(λ * T)
    # Drift adjusted for jump compensation, plus the realized jumps
    μj = μ - (np.exp(α + 0.5*β**2) - 1) * λ * φ + ((n_j * np.log(np.exp(α + 0.5*β**2))) / T)
    # Variance inflated by the realized jumps
    σj = σ**2 + (n_j * β**2) / T
    St = S0 * np.exp(μj * T - σj * T * 0.5 + np.sqrt(σj * T) * np.random.randn())
    return St

def geometric_brownian_stock_path(S0, T, μ, σ):
    St = S0 * np.exp((μ - (σ**2)/2) * T + σ * np.sqrt(T) * np.random.randn())
    return St
I know this code looks ghastly, but given that it was being handled just fine and all of a sudden it wasn't, I cannot really explain this. I restarted the PC, I checked memory and CPU usage (30% and 10% respectively, using mainly just two cores); nothing works.
I really cannot understand why. It is hindering the progression of my work a lot, because I rely on being able to make changes quickly as soon as I see something wrong, but now I have to wait 30 minutes before even knowing what is wrong. One possible issue is that these files are in folders where multiple py files call for the same datasets, but they are inactive, so this should not be a problem.
(there's no need to read this second part, but I put it in if you're interested)
THE MATH: I'm trying to define a distribution for a stochastic process in such a way that it resembles the empirical distribution observed in the past for this process (yes, the data I have is stationary). To do this I'm trying to build a jump diffusion process (lognormal, Poisson, normally distributed jump sizes). In order for this jump diffusion process to match my empirical distribution, I created two systems of equations: one where I equated the expected value of the standard Brownian motion with that of the jump diffusion, and did the same for the expected values of their second moments; and a second where I equated the kurtosis of the empirical distribution to the standardised fourth moment of the jump diffusion, and the skew of the empirical to the third standardised moment of the jump diffusion.
Since I am too lazy to go and open up a book and do it the right way, or to learn how to set up a maximum likelihood estimation, I opted for a brute grid search.
Why all this??
I'm working on inserting alternative assets into an investment portfolio, namely art. In order to do so with more advanced techniques, such as CVaR or the Hamilton–Jacobi–Bellman dynamic programming approach, I need to define the distribution of my returns, and art returns are very skewed and have a lot of kurtosis; simply defining their behaviour as a lognormal Brownian motion with N(mean, std) would cancel out any asymmetry which characterises the asset.
Thank you so much for your help, hope you all have a lovely rest of the day!
A more general problem: this
for α in np.linspace(-1, 1, 2000):
    for δ in np.linspace(-1, 1, 2000):
is very, very dodgy.
You should never be looping in Python over numpy/pytorch/etc. arrays. You should be figuring out how to perform these calculations as operations on the whole array.
Speed ups of an order of magnitude (10x) and more are possible this way.
(But super kudos for using the right Greek letters, very elegant.)
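(To make the suggestion concrete, here is a broadcast version of the first file's triple loop; emp_kurtosis and emp_skew are placeholders standing in for the values computed from the returns data:)

import numpy as np

emp_kurtosis, emp_skew = 5.0, 1.2   # placeholders: come from the data in the post

lam  = np.linspace(0, 1, 100)[:, None, None]   # λ
v    = np.linspace(0, 1, 100)[None, :, None]
beta = np.linspace(0, 1, 100)[None, None, :]   # β

xi = np.sqrt(np.exp(lam * v**2 + lam * beta**2) - 1)   # shape (100, 100, 100)
jump_kurtosis = 16*xi**2 + 15*xi**4 + 6*xi**6 + xi**8
jump_skew = 3*xi + xi**3

mask = (np.isclose(jump_kurtosis, emp_kurtosis, rtol=1e-5)
        & np.isclose(jump_skew, emp_skew, rtol=1e-5))
for i, j, k in zip(*np.nonzero(mask)):
    print(f'match: intensity {lam[i, 0, 0]:.3f} -- jump std {beta[0, 0, k]:.3f} '
          f'-- brownian std {v[0, j, 0]:.3f}')

All 10^6 candidate triples are evaluated in a handful of whole-array operations instead of a million Python-level iterations.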
Thank you for the suggestion, I will try to vectorize it. The problem is that, mathematically speaking, I would know how to set up the matrices, but in terms of coding I have no idea beyond storing the values in arrays.
PS: there is an extension for the Greek letters in Visual Studio; it's actually very easy to use, made me feel fancy.
Are you using the exact same dataset? It might be a scenario where it has a fast best case runtime but a slow worst case, for example.
I am not good enough at math to parse the symbols, so I haven't really tried, but the general structure of the code doesn't seem to have any clear early-exit condition. So it would make sense that it always goes over the entire dataset, and that could take a while with such big numbers.
I would time whether the pd import or one of the loops is what is taking so long, using some basic print statements. At least one section is the culprit; try isolating them so you know which one. I'd put a print statement with a timestamp before the first loop, then before the pd import, and then before the second loop.
Thank you so much for the insight! I will try to time it, even though I'm 99% sure the culprit is the second loop, as the code gets resolved rather quickly if I just lower the subdivisions of the linspace. The dataset I'm using is fixed. Reddit merged the two code blocks, but in reality the two loops are in different py files, so they are not getting executed together: the first loop stores the values in a df, and then I continue on to the second loop. This is so weird, same dataset and same code now running super slow. Worst thing is I cannot reduce the linspace that much, because if I lose granularity I will definitely get wrong results.
I see. I assumed they were in the same file. Either way as others have said, profiling.
You need to figure out what exactly is taking so long, and where. There are different ways to do this like using print debugging, an actual debugger or a timing library for example.
Also, I would test with a smaller dataset, if possible.
I also have a feeling this is related to the size of the data or a change in parameterization. If literally nothing changed, the difference would come from other programs eating up the computer's resources.
I'm on my phone and can't go through the code in detail; however, loops in numpy/pandas code are generally a place for optimization. If you can convert them to numpy/pandas operations that can be vectorized, you will gain performance. Then the looping is done in the C/Cython core of the libraries, which is much faster than Python.
Thank you for the suggestion! Will try that as well!
Thanks for the reply! I will try, but I'm not sure I'm knowledgeable enough to get to any kind of conclusion through profiling; it surely cannot hurt, though!
It's a great tool to learn, and very easy as well. You can always just do then = datetime.datetime.now() and after a few lines do print(datetime.datetime.now() - then) to see how long those few lines took to execute.
Sidenote: it's probably not smart to use the ψ symbol. I mean, it won't hurt if the code works, but if you need to use the variable a lot, are you going to keep it in the clipboard for pasting all the time? Just use the name psi.
Your code as posted doesn’t make sense, it actually does nothing. There is no exit from all your nested loops, and you don’t show your import statements. You also define the same variables multiple times, which I don’t understand.
You also don’t run the two defined functions.
I assume this is just an artifact of the way you posted it. Maybe make a GitHub gist and link to it?
Hey, you're right. I'm sorry I haven't fixed the issue; they are technically three separate py files, two of them just got merged together.
That's what I was thinking too, but at the same time, would it switch during a 15-minute break with the terminal still open?
numerical methods for solving matrix equations
Key Considerations for Numerical Methods in Matrix Equations
These include the types of matrix equations involved, the common numerical methods available, their stability and convergence properties, and the software and libraries that implement them.
Recommendation: For solving large linear systems, consider using iterative methods like the Conjugate Gradient method, especially if the matrix is sparse and symmetric. For smaller systems, Gaussian elimination or LU decomposition can be efficient. Always analyze the condition number of the matrix to assess numerical stability before choosing a method.
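As a minimal sketch of that recommendation (the matrix here is a standard 1-D Laplacian, chosen only because it is sparse, symmetric, and positive definite):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 1000
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format='csr')
b = np.ones(n)

x, info = cg(A, b)                       # info == 0 means the iteration converged
print(info, np.linalg.norm(A @ x - b))   # residual of the iterative solution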