TL;DR
Gaussian Elimination
Gaussian elimination is a widely used method for solving systems of linear equations. It transforms the system into upper triangular form and then uses back-substitution to find the solution [2:1]. With partial pivoting, it is the standard approach for dense matrices, requiring approximately (2/3)n³ arithmetic operations, where n is the number of unknowns [3:1].
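The procedure just described can be sketched in a few lines. This is an illustrative, unoptimized version (forward elimination with partial pivoting, then back-substitution); in practice you would call a library routine instead.

```python
import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting,
    then back-substitution. Educational sketch; prefer a library
    solver such as scipy.linalg.solve in practice."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    n = len(b)
    # Forward elimination: reduce A to upper triangular form.
    for k in range(n - 1):
        # Partial pivoting: bring up the row with the largest pivot.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back-substitution on the upper triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

For example, `gauss_solve([[2, 1], [1, 3]], [3, 4])` solves 2x + y = 3, x + 3y = 4 and returns x = y = 1.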
Matrix Inversion and Cramer's Rule
For smaller systems, such as 2x2 or 3x3 matrices, using the general formula for matrix inversion can be a quick way to solve linear equations [2:1]. Cramer's Rule is another technique that can be effective for these smaller systems when you only need one component of the solution vector. However, it becomes less practical for larger systems because the determinants grow expensive to compute [2:1].
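Extracting a single component with Cramer's Rule looks like this (a small illustrative sketch using NumPy determinants):

```python
import numpy as np

def cramer_component(A, b, j):
    """Return component j of the solution to Ax = b via Cramer's Rule.
    Practical only for small systems (2x2, 3x3); determinants get
    expensive and numerically fragile for larger n."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    det_A = np.linalg.det(A)
    if np.isclose(det_A, 0.0):
        raise ValueError("singular matrix: no unique solution")
    Aj = A.copy()
    Aj[:, j] = b              # replace column j with the right-hand side
    return np.linalg.det(Aj) / det_A
```

For the system 2x + y = 3, x + 3y = 4, `cramer_component(A, b, 0)` returns x = 1 without computing y.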
Software Tools
For those looking for computational solutions, software tools like MATLAB and SciPy's linalg.solve function provide efficient methods for solving linear equations [3:3][3:4]. These tools are particularly useful for handling large systems or when manual computation becomes cumbersome.
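Using `scipy.linalg.solve` takes just a few lines:

```python
import numpy as np
from scipy.linalg import solve

# Solve the 3x3 system Ax = b with an LAPACK-backed direct solver.
A = np.array([[3., 2., -1.],
              [2., -2., 4.],
              [-1., 0.5, -1.]])
b = np.array([1., -2., 0.])
x = solve(A, b)
print(x)            # [ 1. -2. -2.]
print(A @ x - b)    # residual, ~0 up to rounding
```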
Understanding Row Operations
When working with augmented matrices, performing row operations correctly is crucial for simplifying the system and finding solutions. It's important to pay attention to special cases, such as dividing by expressions that could lead to undefined operations (e.g., division by zero) [5:1]. Understanding how these operations affect the determinant and the potential solutions is key to mastering this method [5:2].
Considerations Beyond the Discussions
While these methods are effective, it's also important to consider the specific characteristics of your system, such as sparsity or symmetry, which might allow for more specialized techniques. Additionally, understanding the theoretical background, such as linear algebra concepts, can enhance your ability to apply these methods effectively.
What are some methods to solve linear and quadratic equations in any form?
Linear equations:
Quadratic equations:
Can anyone share the most effective manipulation techniques for solving systems of linear equations? We know the algebraic properties and laws, but manipulation refers to how we technically apply them like substitution, elimination, or matrix methods to calculate the solution more efficiently.
For 2x2-systems, use the general formula for the matrix inverse -- 1 step, done.
If you just need one component of the solution vector, and your system is 2x2 or 3x3, use Cramer's Rule. For larger systems, don't bother, since the determinants quickly get more involved to compute than just using Gauss-Jordan.
Otherwise/in general, use Gaussian elimination to reach upper (row echelon) form, then back-substitution.
It's the best for solving by hand, if you need the entire solution vector. If you only deal with systems of equations over "Z", you can slightly modify the approach to obtain the Smith Normal Form, but that's probably more advanced than what you're looking for.
Thx for the insight; will take note of this, just need to refresh a bit.
i am so tired of solving it the long way
Try Matlab?
scipy.linalg.solve
For a general dense matrix, gaussian elimination with partial pivoting is the standard. Takes (2/3)n^(3) arithmetic operations, where n is the number of unknowns. https://en.wikipedia.org/wiki/Gaussian_elimination#Computational_efficiency
For some types of linear systems you can do much better.
Hello guys!
First off, I know there is no single best method, and every method fits a given problem better or worse. However, my question is: what is the best, or better said most popular, line search method for solving nonlinear equations?
Thanks in advance.
In my experience, if you have access to Hessians, Newton-Raphson iterations work amazingly well for solving non-linear optimization problems. Of course, that assumes you have second-order differentiability.
I'm already using this method; however, I'm not happy with its convergence behavior if I use initial points that are not sufficiently close to the solution. I read that better global convergence can be obtained by applying a line search technique.
My problem is that there are many methods to do a line search and I wanted to know if there is a very popular one or one which is effective for a large range of problems.
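One common, general-purpose choice is a backtracking (Armijo) line search on the merit function ½‖F(x)‖². A rough sketch of a damped Newton iteration follows; the constants are typical but arbitrary, and this is not production code:

```python
import numpy as np

def newton_backtracking(F, J, x0, tol=1e-10, max_iter=50):
    """Newton's method for F(x) = 0, globalized with a backtracking
    (Armijo) line search on the merit function 0.5*||F(x)||^2.
    Illustrative sketch only."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        Fx = np.atleast_1d(F(x))
        if np.linalg.norm(Fx) < tol:
            break
        # Newton direction: solve J(x) * step = -F(x).
        step = np.linalg.solve(np.atleast_2d(J(x)), -Fx)
        t, merit = 1.0, 0.5 * Fx @ Fx
        # Backtrack until the merit function decreases sufficiently.
        while 0.5 * np.sum(np.atleast_1d(F(x + t * step))**2) > (1 - 1e-4 * t) * merit:
            t *= 0.5
            if t < 1e-12:
                break
        x = x + t * step
    return x
```

From a poor starting point like x0 = 10 for F(x) = x² − 2, the damped iteration still walks down to √2, whereas plain Newton can overshoot badly on harder problems.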
I don't think you can assume anything about global convergence with line search methods. The most popular are probably Newton-Raphson or interior point methods for local searches. There is research on using Gaussian process regression to sample global spaces, but those approaches only work in relatively low-dimensional systems and are not trivial to implement.
It depends on your usage and problem. There is no best; some are slower, others are fast to calculate and converge. For simple root finding, there are two commonly used methods: Newton and bisection.
Newton-Raphson is quite popular. It has a few issues to be wary of: overshoot, getting stuck near local extrema, and points where the tangent is nearly horizontal (derivative close to zero). It may not work for every problem.
Bisection is another method, slower in iteration count (linear rather than quadratic convergence) but robust. It avoids the problems of Newton such as overshooting and the near-horizontal tangent problem, but it needs a bracketing interval with a sign change (if the solution isn't in the range you defined, it's not going to find it, unlike Newton). So it can't solve every problem either.
Secant.. I'm not so knowledgeable.
There are a few hybrid methods. Dekker-Brent uses secant + bisection, and there's a Newton + bisection hybrid. Bisection can also be parallelized with more intervals, which is another way to speed up calculations. These hybrids tend to be more robust.
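Several of these methods, including the Brent (Dekker-Brent) hybrid, ship with SciPy, so they are easy to compare on a concrete function:

```python
from scipy.optimize import bisect, brentq, newton

f = lambda x: x**3 - x - 2        # single real root near x ~ 1.52

# Bisection: robust, needs a sign-changing bracket [a, b], linear convergence.
r1 = bisect(f, 1, 2)
# Brent's method (secant / inverse quadratic + bisection hybrid): same
# bracket, much faster in practice, still guaranteed to converge.
r2 = brentq(f, 1, 2)
# Newton-Raphson: quadratic convergence, but needs a good starting point.
r3 = newton(f, x0=1.5, fprime=lambda x: 3*x**2 - 1)
```

All three land on the same root; the difference shows up in iteration counts and in what each method demands (a bracket versus a derivative and a good initial guess).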
Kinda sidetracking: there's also a whole family of Newton-like solvers known as Householder methods, which include Newton's method and Halley's method (yep, another root finder; fewer iterations than Newton but more mathematically intense).
Oscar Veliz's YouTube channel is a great resource on most of these traditional methods.
Other optimizers and algorithms can be used for root finding as well (but not the other way around). They can be overkill and difficult to calculate by hand, but they can sometimes converge better than Newton and bisection. Python's SciPy has a lot of them available.
Hi mathletes!
I'm really having a hard time understanding how to get the answers to this question.
I can create the augmented matrix, but I'm really stuck/bad at simplifying and performing row operations. I seem to choose the wrong method and thus get the wrong answer(s).
Do you have any tips or suggestions??
Thank you so much in advance!
There's a "simple" solution, but it's a bit annoying to use. A system of equations can be expressed in matrix form (A·X = B, where X is the column vector (3×1 here) of the variables x, y, and z, A holds the coefficients of x, y, and z, and B is everything not attached to a variable). By Cramer's rule, such a matrix equation has a unique solution if and only if the determinant of A is different from zero.
This gives us a polynomial in k with roots -1 and 0. Knowing which one leads to no solution and which one gives a multiplicity of solutions is a bit tricky, but the way I'd go about it is to plug the values into the system and see what happens. If you plug 0 in, you'll see that equation 1 becomes the negation of equation 3 (-x + 3y + 2z = 0 and x - 3y - 2z = 0), which means it's redundant information and thus brings nothing to the table. This gives us our multiplicity of solutions.
If you plug -1 in, looking at the same equations you'll notice their results are inconsistent: you get -x + 3y + 2z = -1 and x - 3y - 2z = 0, which together are of course impossible. This gives us the answer to the first question.
The answer to the second one is literally everything else.
Another way of doing it is to work intuitively and use the way the questions are written to your advantage. Thanks to questions 1 and 3 you know there's only one impossible value and one undetermined value, while everything else gives a unique solution. So try to find those.
Again, playing with equations 1 and 3 gives you z as a function of k, from which you can determine that -1 is an impossible value (it leads to a division by 0). z is the simplest variable to isolate, so it's not an unreasonable assumption that someone solving would go for it first. Additionally, you'll notice that equations 1 and 3 are identical aside from the terms in k; if those are set to 0, the equations are identical and equation 3 becomes redundant, leading to an infinity of solutions.
These two give you the answer to the second question which is everything that's not -1 and 0.
But the "proper" way to do it is to use Cramer and show that the determinant is different from 0 in all cases save -1 and 0.
sorry for the late reply, but thank you so much for your detailed answer! That's really helped me understand the concepts a lot better!
Can you show your work? I would do the row reduction with the augmented matrix. At some point, you would divide by 2k+2, so there is a special case when k = -1. At another point, you would divide by k(k+1) so there is another special case when k =0.
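The general workflow here (find the values of k where det(A) vanishes) can be automated with SymPy. The matrix below is a made-up stand-in, since the original exercise isn't reproduced in this thread; it's chosen so its determinant is k(k + 1), matching the two special cases discussed above:

```python
import sympy as sp

k = sp.symbols('k')
# Hypothetical stand-in for the exercise's coefficient matrix;
# the workflow, not this particular matrix, is the point.
A = sp.Matrix([[1, 0, 0],
               [0, k, 1],
               [0, k, k + 2]])
d = sp.expand(A.det())              # determinant as a polynomial in k
special = sp.solve(sp.Eq(d, 0), k)  # k-values with no unique solution
print(d, special)                   # k**2 + k  [-1, 0]
```

For each root you would then substitute back into the augmented matrix to decide between "no solution" and "infinitely many solutions", exactly as described above.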
Hello,
I have not solved differential equations for several years and have recently had the task of understanding and solving them again, so I picked up my old textbook and decided to "relearn" them. Thinking about it, I almost feel like some of the methods taught are either useless in practice or require intense memorization. So I ask: are there any specific methods I should focus on practicing, ignoring the others? Some methods I can see in the table of contents include:
-Separation of variables (for separable equations)
-The use of integrating factors, substitutions and transformations for first order equations that are not separable.
-Method of undetermined coefficients and variation of parameters / annihilator method
-Laplace Transforms
-Series solutions
-Matrix Methods
In my opinion, I feel as if I should only focus on the final three, given that I (possibly ignorantly) think most of the equations I may encounter can be solved that way, or numerically using Python, for example.
I want to know the community's opinion, though: what do you guys think?
Thank you
I agree with u/EatThePinguin’s comment. Analytical solutions can be found for only a small class of ODEs, with the simplest being linear and systems of linear ODEs.
If you are dealing with more general differential equations, it's more practical to find numerical or asymptotic solutions. Another approach is to look at the stability and nature of solutions (i.e., periodic or more complicated orbits, and bifurcations), which falls under the subject of dynamical systems.
If your diff eqs are nonlinear and have no other specific properties, I'd expect most to have no known analytic solution, and definitely no general solution method.
Are you going to encounter a certain class of diff eq? What is the problem area?
Control system theory
I do a lot of controls-related work. Knowing solution methods for ODEs beyond matrix exponentiation hasn't been very useful for solving control problems. Typically, if your dynamics are nonlinear, you have little hope in arriving at an exact solution anyway.
Using the DSolve and NDSolve commands in Mathematica is a good method.
Or ode45 etc. in Matlab
Integrating factors and variation of parameters are completely useless. Matrix methods, as you call them, are just higher dimensional ODEs.
Perturbation theory and homotopy methods (the original 1999 paper by He has well over 2000 citations so far)
Maybe a bit embarrassing to ask, but my exposure to numerical methods is limited so far. I've been trying to develop my own finite solver to learn more about how it all works, and I've been reading what other people have done, but one method captured my attention and I'm stumped on what it is. I've attached the photos below.
I've searched everywhere hoping to find a paper or something online that describes this method but no luck. The Lagrange Multipliers I'm finding online aren't related to what's covered here, since everything I'm finding is related to optimization. So what exactly is this method called, and is it worth exploring it?
Edit: thank you for the very detailed responses! they all pointed me to the right direction
I'm not sure on the specifics, as this particular topic is outside my knowledge, but in general this appears to be Lagrange's method of undetermined multipliers; it's usually used in optimisation.
Eigen vector/values
This is Lagrange multipliers used on a positive semidefinite quadratic form with matrix notation. That's what you should Google. If you search for "capon beamformer Lagrange multipliers" or "MVDR Lagrange multipliers" I am confident you will find a fairly detailed, step by step solution somewhere on the web. I've seen it, but I don't have a reference handy. It's out there. Bear in mind that MVDR is just one example of such a problem. It's not the same as what you're looking at, but it's close enough not to matter a whole lot.
They are just using matrices for all their notation. If they weren't doing that, it'd be ordinary calc 3.
Pretty sure it is the regular Lagrange multipliers; lambda is just a vector, to make Cu into a scalar term. If you differentiate with respect to u's components and lambda's components, you'll get the same system.
Eq. 9.2 appears frequently in the numerical optimization and nonlinear programming literature. The specific equation is formulated/solved as a quadratic program and constructing eq. 9.3 is one approach. I believe, modern linear solvers can operate directly on the symmetric indefinite matrix to arrive at solutions to the variables (displacements in this case) and the Lagrange multipliers.
You say you’re trying to develop your own solver. Is this just a learning exercise? I would review the literature on symmetric indefinite matrix equation solvers. If you have access to back issues of SIAM journals, start there. Otherwise, if you’re just trying to get the job done, your time might be better spent looking for high quality computer solutions that are available.
Edited to include: look at routines MA21 and MA57 from the Harwell Subroutine Library (I. A. Duff).
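Assuming the problem being described is "minimize ½uᵀKu − uᵀf subject to Cu = d" (the usual reading of such a saddle-point formulation; the names K, C, f, d and the small numbers below are illustrative stand-ins, not taken from the attached photos), assembling and solving the symmetric indefinite system directly looks like this:

```python
import numpy as np

# Minimize 0.5*u^T K u - u^T f subject to C u = d via Lagrange
# multipliers. K, C, f, d are illustrative stand-ins for the
# stiffness matrix, constraint matrix, load vector, and constraint values.
K = np.array([[4., 1.], [1., 3.]])   # SPD "stiffness" matrix
f = np.array([1., 2.])
C = np.array([[1., 1.]])             # one constraint: u1 + u2 = 1
d = np.array([1.])

n, m = K.shape[0], C.shape[0]
# Assemble the symmetric indefinite (saddle-point / KKT) matrix.
KKT = np.block([[K, C.T],
                [C, np.zeros((m, m))]])
rhs = np.concatenate([f, d])
sol = np.linalg.solve(KKT, rhs)
u, lam = sol[:n], sol[n:]            # displacements, multipliers
assert np.allclose(C @ u, d)         # constraint is satisfied exactly
```

A general dense solver works here for illustration; the specialized routines mentioned above (and sparse symmetric indefinite factorizations generally) exploit the structure of this matrix at scale.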
Thanks for the input.
It's something as a side project for grad school, hopefully to build my skills for interviewing. End goal is to develop it into something pretty elaborate but I'm at the starting stages.
I'll take a look at those articles.
Looks good to me
If you're ever in doubt, sub back in. One wrong answer is enough to know you're wrong (though no number of right checks will prove you're correct).
thank you! how would i sub and double check in this situation?
Due to the linearity of the solutions, I would substitute in the first vector, then the second (including the factor of y), and then the third (including the factor of z), keeping the constant vector in both the second and third.
Using the third as an example
x= 5-6t, y=0, z=2t-3, t=t
Sub this into each line and see if it works out
I guess you could pick values for y and t, substitute them in to find values of x and z, and see if all three of your original equations are satisfied.
[deleted]
Can you give an example of the sort of thing that isn’t clicking?
So I understand, like, up to a point. You are always looking to eliminate x and y by getting them to eventually cancel out. I can see that you use the least common multiple to start out with, and I see his logic for the most part.
The thing I'm having trouble with (and maybe I'm just the problem lol) is how he just KNOWS whether to multiply or divide. To me it seems like he's just randomly selecting which one to use and going "ya that's right" without explaining it at all. WHY and WHEN do we use multiplication and division?!
Wait, I think I'm just being wack as hell. OK, so is it like a specific order of operations where division is the only thing that would make sense in that scenario I circled? Like, I'm looking it over and it almost looks like division is the only operation he could do that makes sense there, because there's no way multiplying 2x by 2 and 14 would give a number that makes sense for the initial equation.
I THINK IM STARTING TO GET IT
2x + 6 = 20 means you start with x, multiply by 2, then add 6, and the result is 20. How do you reverse this procedure to determine what the starting number x was? Go backwards and undo the steps one by one. The final step was "add 6, and the result is 20"; what was the number before this step? Well, if you added 6 and got 20, then you must have previously had 14. The step before that was "multiply by 2", and we now know the result after doing this was 14. So what was the number before? If we multiplied by 2 and got 14, then the number must have been 7. All the steps have now been reversed, so we are done: x = 7.
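The same "undo the steps in reverse order" idea, written out as a couple of lines of Python:

```python
# Reverse the steps of 2*x + 6 == 20, last step first.
result = 20
result = result - 6    # undo "add 6": 14
result = result / 2    # undo "multiply by 2": 7.0
x = result
assert 2 * x + 6 == 20  # check against the original equation
```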
you mean like systems of linear equations?
Yes!!!!
I can assure you, if they're solving a linear system, they're performing equal operations on both sides or they're doing it wrong. Link an example and I (or anyone else on here) can help you understand better.
Your algebra is solid though, right? Like, you could find x given that it is the only variable in a single equation?
Think of what an equals sign means.
If you do the same thing to what is on either side of that sign, then the actual equality must still hold true.
Perhaps you could give an example where it doesn't make sense, and someone can explain it to you.
Please show us a case where you were mystified. We can definitely explain what’s going on
First let's clear up a prerequisite. When you add one equation to another, you actually are doing the same thing to both sides. Because the two sides of an equation are equal, you are adding the same thing to both sides. Is that what you're confused about there?
Anyway, it sounds like you are trying to learn "steps" to math problems, but that's not how math or problem solving works. You just have tools and goals and then you do something.
For linear systems, the two main tools are substitution and combination. Good old substitution is totally fine! You just pick a variable, solve for it, plug it into the next one, and so on, and you'll be done!
The purpose of combination is that it's faster, not that it's necessary. But the order doesn't matter at all. There are, I think, 108 ways to solve a 3-variable problem, and at least 4 ways to write each step. So you have to just let go of looking for a "the way" to do a problem. Ultimately the point is to clear the equations column-by-column. But people jump around if they see a shortcut. Like if you see a 4 in one row and a -4 in another, you might as well get an easy 0 now.
If you don't know where to start with combinations, just go from the left column to the right, and from the top number to the bottom. Divide the top row by the first number to make it a 1, then use that to kill the rest of the column, repeat with the second number in the second column, and so on. That's how a computer does it. It's just that fractions are annoying to humans, so you can jump around a little.
And if you don't want to do combinations at all, just do substitution. But you absolutely cannot just try to emulate someone else's solution. The thought process is "this is what I have, this is what I want, here is the list of theorems and tricks that might possibly move me in that direction, I'll try this one." If it helps, you repeat; if it doesn't, you go back and try a different one. Which is fine and normal.
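The left-to-right, top-to-bottom recipe described above ("divide the top row by the pivot to make it a 1, use it to kill the rest of the column, repeat") can be written out directly. This is a sketch, not a robust library routine; it uses exact fractions so the "annoying fractions" at least stay exact:

```python
from fractions import Fraction

def rref(rows):
    """Reduce an augmented matrix to reduced row echelon form,
    column by column from the left: scale the pivot row to get
    a 1, clear the rest of the column, move on."""
    M = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(M[0]) - 1):            # last column is the RHS
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue                          # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]       # swap the pivot row up
        piv = M[r][c]
        M[r] = [v / piv for v in M[r]]        # make the pivot a 1
        for i in range(len(M)):               # kill the rest of the column
            if i != r and M[i][c] != 0:
                factor = M[i][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == len(M):
            break
    return M
```

For the system x + y = 3, x − y = 1, `rref([[1, 1, 3], [1, -1, 1]])` produces `[[1, 0, 2], [0, 1, 1]]`, i.e. x = 2, y = 1, with no back-substitution needed.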
A fantastic answer
So, I finished my Abitur a year ago, but now I want to refresh my math skills and have started reviewing some topics. I've now arrived at systems of linear equations, and honestly I don't understand what exactly the point is when you solve one. What am I actually doing there <: ?
From school you surely still remember tasks/equations like 4x = 8. You had to solve them, so you divided by 4 and got x = 2. That was one equation with one variable.
Then came problems with two equations and two variables.
For example 4x + 2y = 8 and 3x + 2y = 6. Now the task is to find a pair of values (x, y) such that both equations are satisfied: x = 2, y = 0. The important part is that the solution works for both equations. For example, the pair x = 0, y = 4 works only for the first equation. Each equation considered on its own has infinitely many solutions. Only when you require that both hold is there (in this case; there are other cases too) exactly one solution that works for both equations simultaneously.
You can scale this up arbitrarily: take three equations and three variables. Again, all three equations should hold for one pair (actually triple) of values (x, y, z).
For example, x, y, and z could be purchase prices.
One person buys 3x + 4y + 7z = 20 and pays 20 euros, and if you also know the purchases of two more people (who buy only the three products mentioned), you can work out how expensive the products were. That's not quite the classic use case, but it's a good way to picture it.
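The 2x2 example above (4x + 2y = 8 and 3x + 2y = 6) can be checked numerically in a couple of lines:

```python
import numpy as np

# The system from the example: 4x + 2y = 8 and 3x + 2y = 6.
A = np.array([[4., 2.],
              [3., 2.]])
b = np.array([8., 6.])
x, y = np.linalg.solve(A, b)
print(x, y)    # 2.0 0.0
```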
The other answers read rather philosophically, so here is a somewhat blunter one:
A system of equations (whether linear or not) is a collection of equations in several variables (you can think of it as a multidimensional map) that you are trying to solve simultaneously. Once you've solved such a system, you get the set of all combinations of values for the individual variables that make all of these equations true at the same time.
In the case of linear systems you can be even more concrete: given a matrix, you are looking for a vector such that matrix multiplication produces a specified other vector. What is that good for? Linear algebra is the foundation of a huge range of fields, from classical mechanics and electrodynamics through quantum mechanics and optics to general relativity. Everywhere you can set up systems of equations, and everywhere the fundamental ideas used in solving linear systems carry over.
As applications I'd add machine learning, data science, and statistics, which nowadays are probably more the "that's what I want to do" than optics and mechanics.
You are looking for the intersection point of two lines.
A linear equation can be represented as a line in a coordinate system.
Two equations are two lines: they meet somewhere, are parallel, or coincide.
In practice these can be costs, distances, or forces: somewhere they are equal, and that point is often interesting, because that's where the practical situation changes: first one option was more expensive, afterwards the other.
That depends on the perspective from which you want to look at it.
After a first semester of a math degree, you'd say you are computing with matrices.
If you view mathematics entirely functionally, as a model of reality, it would depend on the information you are currently modeling with the system of equations. Then, when you solve a system, you'd be computing, say, your car's travel time on the way from A to B.
If you view mathematics fundamentally, you are simply exploiting various properties of the identity relation to derive new identity statements: transitivity, symmetry, reflexivity, and so on.
As a philosopher, I'd simply answer that you are strictly logically deriving one piece of information from a set of pieces of information that you accept as true, i.e., premises. So when "solving" a system of equations, you are just setting up a deductively valid argument.
Wow ❤️, I really enjoyed your answer, but I don't know whether it matches OP's level of knowledge, given that he says he's reviewing school math... 😜
Whenever you have a real-world situation with several conditions (= equations), solving the system of equations gives you values for the variables such that all the conditions are satisfied.
Best methods for solving linear equations
Here are some effective methods for solving linear equations:
Graphical Method:
Substitution Method:
Elimination Method:
Matrix Method (Gaussian Elimination):
Cramer's Rule:
Using Technology:
Recommendation: For beginners, the substitution and elimination methods are often the most straightforward and intuitive. As you advance, consider learning matrix methods for larger systems, as they provide a systematic approach to solving equations. Always double-check your solutions by substituting them back into the original equations!