TL;DR Linear equations have a constant rate of change and can be expressed as a sum of terms, each involving a constant multiplied by the variable or its derivatives. Nonlinear equations involve variables raised to powers other than one, multiplied together, or in functions such as exponentials.
Linear Equations
Linear equations are characterized by their ability to be expressed as a linear combination of variables or their derivatives. For example, in linear models, the relationship between the dependent variable (y) and independent variables (x) is expressed as (y = \beta_0 + \beta_1 x), where each term is either a constant or a product of a constant and a single variable [1:6]. In differential equations, a linear ordinary differential equation (ODE) takes the form (a_n(x) \frac{d^n y}{dx^n} + \ldots + a_0(x)y = g(x)), where the (a_i(x)) are functions of (x) but do not involve (y) or its derivatives [4:4].
Nonlinear Equations
Nonlinear equations involve terms where variables are multiplied together, raised to powers other than one, or appear in non-linear functions like exponentials or logarithms. For instance, an equation like (y = 10 + 0.5^x) is nonlinear because the variable (x) is in the exponent [1:10]. In differential equations, any equation that cannot be expressed as a linear combination of the dependent variable and its derivatives is considered nonlinear. For example, (dy/dx - 3xy^2 = x^3) is nonlinear due to the (y^2) term [5:1].
Linear in Parameters vs. Linear in Variables
A common source of confusion arises from the distinction between being "linear in parameters" and "linear in variables." A model can be linear in its parameters even if it involves transformations of the variables. For example, polynomial regression is linear in parameters despite involving terms like (x^2) because the coefficients are not multiplied by each other or the variables [2:4]. Similarly, a linear ODE may involve nonlinear functions of the independent variable (x), but remains linear as long as (y) and its derivatives appear linearly [4:1].
Applications and Implications
Understanding whether an equation is linear or nonlinear has practical implications for modeling and solving problems. Linear models are often easier to analyze and provide closed-form solutions, whereas nonlinear models may require numerical methods or approximations [2:7]. In econometrics, linear models are preferred due to their simplicity and ease of interpretation, although they may not always capture complex relationships accurately [2:6].
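To make that practical difference concrete, here is a minimal sketch (assuming numpy and scipy are available; the data and functional forms are made up purely for illustration): a model that is linear in its parameters has a direct least-squares solution, while a model that is nonlinear in its parameters typically needs an iterative optimizer and starting values.

```python
# Minimal sketch: a linear-in-parameters model has a closed-form least-squares
# fit, while a nonlinear-in-parameters model (here y = a * exp(b*x), chosen
# purely as an example) is usually fit iteratively.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)

# Linear in parameters: y = b0 + b1*x + noise, solved directly by least squares.
y_lin = 2.0 + 1.5 * x + rng.normal(scale=0.3, size=x.size)
X = np.column_stack([np.ones_like(x), x])          # design matrix [1, x]
beta, *_ = np.linalg.lstsq(X, y_lin, rcond=None)

# Nonlinear in parameters: y = a * exp(b*x) + noise, needs an iterative fit.
def nonlinear_mean(x, a, b):
    return a * np.exp(b * x)

y_nl = 2.0 * np.exp(0.4 * x) + rng.normal(scale=0.3, size=x.size)
params, _ = curve_fit(nonlinear_mean, x, y_nl, p0=[1.0, 0.1])

print(beta)    # roughly [2.0, 1.5]
print(params)  # roughly [2.0, 0.4]
```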
I am having a hard time grasping what a linear model is. Most definitions mention a constant rate of change, but I have seen linear models that are straight and some that are curved. So that cannot be true. I have a ton of examples: Y = B0 + B1X, linear … Y = 10 + 0.5X, linear … Y = 10 + 0.5X1 + 3X1X^2 , linear … Y = 10 + 0.5X - 0.3X2, linear … Y = 10 + 0.5^X, not linear …
Why? What is the difference? I can see it: our explanatory variable X is in the exponent, so it cannot be linear. But why? What does the relationship between x and y have to be in order to be linear? What are the rules here? I'm not even sure I understand what the word linear means anymore.
After scrolling through many threads to no avail, please explain it to me like I am five.
You are right. In a linear model we expect the rate of change to be constant. After all, each predictor gets only one parameter indicating the strength of its association with the dependent variable, so there is no room for anything other than a constant rate of change.
But if we notice a non-linear association (e.g. in a scatterplot), this does not mean that we cannot fit a linear model. We just need a way to get more than one parameter describing the non-linear association. If we notice a curvilinear association, for instance, we could add the same predictor twice to the model, as x and x-squared. Now we get two parameters for the association between x and y: the first parameter (belonging to x) indicates the slope of the line at x = 0 (or the overall linear trend if x is mean-centered), and the second parameter (belonging to x-squared) indicates the amount of curvature of the association (how much the slope changes as x progresses). Knowing both parameters, you can draw the association between x and y.
In a similar way you can also add an x to the power of 3, 4, et cetera to get more parameters as the association becomes more complex; see the sketch below. This does not mean that all non-linear associations can be captured with a linear model, but you can get pretty far.
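Here is a minimal numpy sketch of the x-plus-x-squared idea from the answer above (the coefficients and noise level are made up): the fitted curve is bent, but the model is still linear in its parameters because each coefficient just multiplies a column of the design matrix.

```python
# A curved relationship fit with a *linear* model by giving the same predictor
# two columns, x and x^2.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 100)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(scale=0.5, size=x.size)

# Design matrix: intercept, x, x^2 -- still linear in the parameters.
X = np.column_stack([np.ones_like(x), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # roughly [1.0, 0.5, -0.8]: slope at x = 0 and amount of curvature
```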
This definitely helped, thank you! I think I was having trouble differentiating between the graph appearing curved and it still being a linear equation. I’m understanding what people mean when they say “linear in its parameters.” So there is a constant rate of change within each individual parameter. Does non-linearity typically show in the event of the predictor being in an exponential position? Are there other examples of x being in a position where the equation cannot be written linearly?
I bet a lot of the confusion disappears when I say this:
The function y = 2x^2 is linear in *x^2*.
So if I wanted to fit a model that had a complicated shape, I could choose a linear model, but include squares of the predictors. Plotting the fit vs the predictor would give a shape that is not linear.
If you are seeing generalized linear models, the thing that "is linear" is just a transformation of something you are modeling. For instance, in logistic regression, the log-odds is linear in the predictors--but the probability is not.
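As a small illustration of that GLM point (a sketch only, assuming scikit-learn is installed; any GLM routine would do), the fitted log-odds below is a straight-line function of the predictor while the fitted probability is S-shaped:

```python
# Logistic regression: the log-odds is linear in x, the probability is not.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
x = rng.normal(size=(500, 1))
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 2.0 * x[:, 0])))   # true probabilities
y = rng.binomial(1, p_true)                              # 0/1 outcomes

model = LogisticRegression().fit(x, y)
b0, b1 = model.intercept_[0], model.coef_[0, 0]

grid = np.linspace(-3, 3, 7)
log_odds = b0 + b1 * grid                      # linear in the predictor
prob = 1.0 / (1.0 + np.exp(-log_odds))         # nonlinear (sigmoid) in the predictor
print(np.round(log_odds, 2))
print(np.round(prob, 2))
```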
While Y = 10 + 0.5^x is not linear, log(Y) = 10*log(X) + 0.5 is linear.
Ok, so a non-linear equation can be used for an LM so long as the equation can be written linearly? Do we typically see non-linear equations when x is in the exponent? If not, what are some other examples?
For what it’s worth, those two equations are not equivalent to some log-transform, they are entirely different models. The first cannot be expressed linearly. But sure, there are circumstances where the linear part is “in the exponent”. In general you can write linear models which relate the “linear piece” (sum of products of your X’s and corresponding parameters) to some response through a link function. You could have log-linear models (they are purely product models after exponentiating), you could have linear models relating to the square-root of your response, you could have linear models with radial transform, so on.
Keep in mind the linear model is emphatically not about specific X’s. It is about the parameters and how they relate to each other. When you say “the X is in the exponent”, remember there is most often not a single X. Your question cannot be answered as asked because it suggests a fundamental misunderstanding of terms. That’s why I’m (and others are) saying things like “the linear part” instead of focusing on one X.
Think of linear algebra. The linear model is
Y = XB + e
Where B is the vector of coefficients. In this approach, the coefficient cannot go into the exponent. There can be exponents, but they must be constant, such as X^2.
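A quick numerical version of that matrix view (illustrative data; assuming numpy): the estimate is B_hat = (X'X)^(-1) X'y, and an X^2 column is fine because the exponent is a fixed constant, not a parameter.

```python
# Y = XB + e in code: estimate B via the normal equations.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, size=200)
y = 3.0 - 1.0 * x + 0.5 * x**2 + rng.normal(scale=0.2, size=x.size)

X = np.column_stack([np.ones_like(x), x, x**2])   # constant exponents only
B_hat = np.linalg.solve(X.T @ X, X.T @ y)         # (X'X)^(-1) X'y
print(B_hat)  # roughly [3.0, -1.0, 0.5]
```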
While correct, I don’t think this is helpful for OP
That's fine. I thought some of the other answers were providing good explanations already, but didn't see this aspect of it covered directly.
If OP found other explanations helpful and not this one, c'est la vie.
Generally, we mean that a model is "linear in its parameters".
You might find this post helpful: https://www.reddit.com/r/AskStatistics/s/vOawXTKjbW
Thank you so much. I have been understanding wrong till now!!
Please confirm if I am right. "Linear in parameter" doesn't necessarily mean "linear function of y".
I'll try to avoid confusing terms.
Linear in parameters does not mean that predictions for y will look like a line.
I understand this question has probably been asked many times on this sub, and I have gone through most of them. But they don't seem to be answering my query satisfactorily, and neither did ChatGPT (it confused me even more).
I would like to build up my question based on this post (and its comments):
https://www.reddit.com/r/statistics/comments/7bo2ig/linear_versus_nonlinear_regression_linear/
As an Econ student, I was taught in Econometrics that a Linear Regression model, or a Linear Model in general, is anything that is linear in its parameters. Variables can be x, x^(2), ln(x), but the parameters have to enter simply as β, and not as β^(2) or sqrt(β).
Based on all this, I have the following queries:
1) I go to Google and type nonlinear regression, I see the following images - image link. But we were told in class (and also can be seen from the logistic regression model) that linear models need not be a straight line. That is fine, but going back to the definition, and comparing with the graphs in the link, we see they don't really match.
I mean, searching for nonlinear regression gives these graphs, some of which look like polynomial regression (and other examples I can't recall) too. But polynomial regression is also linear in parameters, right? Some websites say that linear regression, even with curved fitted lines, essentially refers to a hyperplane in the broad sense, that is, an internal link function that is linear in parameters. Then come Generalized Linear Models (GLMs), which confused me further. They all seem the same to me, but, according to GPT and some websites, they are different.
2) Let's take the Exponential Regression Model -> y = a * b^x. According to Google, this is a nonlinear regression, which is visible according to the definition as well, that it is nonlinear in parameter(s).
But if I take the natural log on both sides, ln(y) = ln(a) + x ln(b), which can further be written as ln(y) = c + mx, where the constants ln(a) and ln(b) are written as some other constants. This is now a linear model, right? So can we say that some (not all) nonlinear models can be represented linearly? I understand functions like y = ax/(b + cx) are completely nonlinear and can't be reduced to any other form.
In the post shared, the first comment gave an example that y = abX is nonlinear, as the parameters interacting with each other violate Linear Regression properties, but the fact that they are constants means that we can rewrite it as y = cx.
I understand my post is long and kind of confusing, but all these things are sort of thinning the boundary between linear and nonlinear models for me (with generalized linear models adding to the complexity). Someone please help me get these clarified, thanks!
> In the post shared, the first comment gave an example that y = abX is nonlinear, as the parameters interacting with each other violate Linear Regression properties, but the fact that they are constants means that we can rewrite it as y = cx.
It's still non-linear.
But how?
It can't be written as Y = a f1(X) + b f2(X)
> I mean, searching for nonlinear regression gives these graphs, some of which are polynomial regression
Which ones? You've just posted a link to a page of google image search results. There are thousands. Some of them might be incorrect. Who knows.
> But polynomial regression is also linear in parameters, right?
Correct.
> But if I take the natural log on both sides, ln(y) = ln(a) + x ln(b), which further can be written as ln(y) = c + mx, where the constants ln(a) and ln(b) were written as some other constants. This is now a linear model, right?
One is linear, the other is not. They are different models. For example, if the errors are normal and homoskedastic in one case, they won't be for the other. The log model ln(y) = c + mx is also modelling a different conditional mean (if you exponentiate the fit, you'll get the conditional geometric mean on the original scale).
> In the post shared, the first comment gave an example that y = abX is nonlinear, as the parameters interacting with each other violate Linear Regression properties, but the fact that they are constants means that we can rewrite it as y = cx.
Sort of, yes. The model y = abx is not identified, as there are infinitely many solutions (a,b) which give an identical conditional distribution for y. You can eliminate this redundancy by defining a new model with y = cx, as you said, which is a linear model. The first model is, strictly, not a linear model, but it's also not a model that anyone would ever use.
> Then comes Generalized Linear Models (GLM), which further confused me. They all seem the same to me, but, according to GPT and some websites, they are different.
Don't use ChatGPT. Generalized linear models are a broad family of models that expand the standard linear model to include cases where the conditional distribution of the response is non-normal, plus a few other generalizations. It's an extremely broad class. They're also going to be difficult to understand until you have a very clear understanding of the general linear model, since generalized linear models allow for non-linear relationships between the predictors and the conditional response through the link function.
Thank you for such a detailed response!
>Which ones? You've just posted a link to a page of google image search results. There are thousands. Some of them might be incorrect. Who knows.
Yeah I agree, but I was referring to most of the websites I accessed from that link. But you're right.
>One is linear, the other is not. They are different models. For example, if the errors are normal and homoskedastic in one case, they won't be for the other. The log model ln(y) = c + mx is also modelling a different conditional mean (if you exponentiate the fit, you'll get the conditional geometric mean on the original scale).
Yep, I get that, but I was referring to it in the context of, say, the log-linear model as used in econometrics. But building on your answer, would you say that we can convert similar nonlinear regression models into a linearized format and use that? Doesn't that defeat their inherent purpose of being nonlinear?
You also mentioned general linear model being different from generalized linear model. Could you elaborate a bit on that too?
I guess a few sample lectures from some graduate stats classes on model fitting would give me a deeper insight into this. I totally agree ChatGPT should not be a go-to source for all doubts, but I was not finding my answers anywhere.
Last query: in this whole process, including the points I raised in my post and the answers you gave, where does that leave us with the errors? I meant the u_i term. Linearizing a model would also include this term, right? How do we deal with that?
Could you suggest any sources - book chapters, videos, even research papers, etc.? Thanks again.
I think you're conflating multiple meanings of linear. Being linear in the coefficients (the usual definition) is different from being a straight line in (x, y) space. You can fit curves with a linear model by including terms like x^2 or log(x), but that doesn't make the model nonlinear in the statistical sense.
On your example, the model y = a * b^x is not the same as ln(y) = ln(a) + x ln(b). There’s an implicit error term, and transforming y also transforms the error. That changes the assumptions, target (e.g., mean vs. geometric mean), standard errors, and back-transforming should have a bias correction since E[log y | x] is not the same as log E[y | x]. So the fact you can rewrite something to look linear doesn’t mean you should, or that it even answers the same question.
Basically, linear regression is linear in β, with additive errors, and curves via transforms are okay. Generalized linear models keep a linear predictor but connect it to the mean with a link function and use a distribution that matches the outcome; the fit can be curved on the data scale but is still linear in β after the link. Nonlinear regression uses a mean function that is genuinely nonlinear in its parameters. Your other example y = abX is technically non-identifiable until you reparameterize into a linear model (since only a*b matters).
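To see that these really are two different models, here is a hedged sketch (assuming numpy and scipy; the data are simulated): fitting y = a * b^x by nonlinear least squares versus fitting ln(y) = c + m*x by ordinary least squares gives related but not identical answers, because the error enters differently and the log model targets the geometric rather than the arithmetic mean.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
x = np.linspace(0, 4, 200)
y = 2.0 * 1.5**x * np.exp(rng.normal(scale=0.2, size=x.size))  # multiplicative noise

# Nonlinear regression: a and b enter the mean function nonlinearly.
popt, _ = curve_fit(lambda x, a, b: a * b**x, x, y, p0=[1.0, 1.2])

# Log-linear model: linear in the parameters c = ln(a), m = ln(b).
X = np.column_stack([np.ones_like(x), x])
c, m = np.linalg.lstsq(X, np.log(y), rcond=None)[0]

print(popt)                    # (a, b) from the nonlinear fit
print(np.exp(c), np.exp(m))    # implied (a, b) from the log-linear fit
```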
This is a great example of the disconnect between economics and the "statistics in service of economics" (aka econometrics). Unlike other disciplines (especially the hard sciences), where statistical models are being derived from theory top-down, econometrics works bottom-up from a linear model. All economic content, if there is any, is to be bent and molded such that it dovetails with the Gauss-Markov assumptions.
Truth is, it doesn't matter whether your model is linear or nonlinear (in the parameters, the variables, or both). Those are estimator problems, not science problems. And if econometrics classes were to teach model-based thinking rather than the hyperfocus on estimators and their properties under unrealistic assumptions, we wouldn't see this confusion.
Watch McElreath's Statistical Rethinking, especially the last lecture, if you want more details.
>All economic content, if there is any, is to be bent and molded such that it dovetails with the Gauss-Markov assumptions.
I believe the reason for that is that econometrics uses statistics as a means to dig into data and create insights, like most fields, rather than using statistics in its own right. It's just a means of exploring how useful a policy is or how a sample would behave hypothetically in a situation. The actual concepts of statistics go much deeper than this and may not be fully applicable to econometrics. Just my 2 cents.
>And if econometrics classes were to teach model-based thinking rather than the hyperfocus on estimators
That would defeat the purpose of econometrics because, at the end of the day, it's different from applied ML and statistics. It's just a tool in the whole process of end-to-end policy making.
Thanks for the link, I'll definitely have a look!
Most of the time it's a matter of parsimonious parametrisation given data availability.
Any nicely behaved nonlinear function of the Xs can be represented as an infinite (multi-)polynomial of the Xs via a Taylor expansion. The catch is that you would need to estimate infinitely many parameters in that case.
So, if you know the functional relationship between y and the Xs (quite a bold assumption), you can run a nonlinear estimation, or transform the nonlinear form to a linear one, and directly estimate the parameters of interest. Otherwise, you can approximate the unknown nonlinear function with polynomial regression, as in the sketch below.
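A minimal sketch of that approximation idea (assuming numpy; the "unknown" function exp(x) is just an example): a polynomial regression of increasing degree, which remains linear in its coefficients, approximates a smooth nonlinear function better and better.

```python
import numpy as np

x = np.linspace(-1, 1, 200)
y = np.exp(x)                        # stand-in for an unknown nonlinear relationship

for degree in (1, 2, 3, 5):
    coefs = np.polyfit(x, y, degree)              # still linear in the coefficients
    max_err = np.max(np.abs(np.polyval(coefs, x) - y))
    print(degree, round(max_err, 5))              # the error shrinks as the degree grows
```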
Got your point, but I just referred to the polynomial regression as an example. I mean, I read the properties of Linear Regression and see linearity of parameters, then I open examples of Nonlinear Regressions and see graphical examples resembling polynomial regression. Wrong sources perhaps, but this mix-up created my confusion. However, I understand your point, thanks!
It is a matter of definitions. Different fields use similar stat methods and name them differently. The correct statement:
As long as the relationship is linear in parameters, you can apply stat machinery of linear regression modelling. Relationships between y and X in this case might or might not be linear.
Once the relationship is not linear in parameters, stat inference requires additional derivations.
What is the definition of linear ODE?
My initial answer was E, but I still got it wrong. I also picked C and E, which was also incorrect. I know that C, D, E, and F, are first order, but they are not all linear ODEs. I would appreciate your help.
I think the question is not pick one, but pick all.
In which case it should be C E and F?
Thanks for your response. How is F linear?
A linear ODE is one of the form a_1(x) d^n y/dx^n + ... + a_n(x) y = g(x).
It doesn't matter if the a_i are nonlinear functions of x.
So everything except B and D is linear. We then remove A, as it is not first order.
I have some questions regarding the definition of a linear ordinary differential equation. By definition, a linear ODE has the following form:
aₙ(x) dⁿy/dxⁿ + ... + a₁(x) dy/dx + a₀(x) y = g(x), where aₙ(x) is not necessarily linear.
Based on this definition, can aₙ(x) = y(x)? If it can, then my question is resolved. It's the other case, where it cannot occur, that my question is about. Suppose I have an equation of the following form:
Now suppose we find the family of solutions y(x) = x² + k. I'm not sure if a solution like this makes sense, but let's assume it does.
Now with these assumptions:
Then, this form would no longer be a linear equation, so was it never linear?
Yes, it can turn out that aₙ(x) = y(x) is a solution.
Example. Suppose y = x^(2). Then y satisfies the linear differential equation
x^(2) y' – 2x y = 0.
Note that this is different, however, from the nonlinear differential equation
y y' – 2x y = 0,
even though they both have y = x^(2) as a solution. The linear equation has general solutions of the form y = Ax^(2). The nonlinear equation has solutions of the form y = x^(2) + B.
This is similar to the algebraic equation
2x = 4.
This is a linear equation, and x = 2 is a solution, but it is also equal to the coefficient multiplying x in this equation. So we know that it is also a solution to the nonlinear equation
x · x = 4.
(And there is a second solution to this second equation, x = –2, that the first equation doesn't have.)
Does that make sense?
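If it helps, the ODE example above can also be checked symbolically; here is a small sketch assuming sympy is available (dsolve may return one or more solution branches for the nonlinear case):

```python
# The linear ODE x^2 y' - 2x y = 0 has general solution y = A x^2, while the
# nonlinear ODE y y' - 2x y = 0 has solutions of the form y = x^2 + B.
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

linear_ode = sp.Eq(x**2 * y(x).diff(x) - 2 * x * y(x), 0)
nonlinear_ode = sp.Eq(y(x) * y(x).diff(x) - 2 * x * y(x), 0)

print(sp.dsolve(linear_ode, y(x)))      # y(x) = C1*x**2
print(sp.dsolve(nonlinear_ode, y(x)))   # includes y(x) = x**2 + C1
```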
Thank you very much. I understand it now. A specific solution and the general solution are not the same thing. Sorry for my English.
Which concept would you like help understanding?
linear/nonlinear
order
degree
dependent variable(s)
independent variable(s)
So I finished my Abitur a year ago, but now I wanted to refresh my "math skills" and started reviewing some topics. I've now arrived at systems of linear equations, and honestly I don't understand what exactly the point is when you solve them. What exactly am I actually doing there?
Whenever you have a real-world situation with several conditions (= equations), solving the system of equations gives you values for the variables such that all the conditions are satisfied.
From school you surely still remember problems/equations like 4x = 8. You had to solve them, so you divided by 4 and got x = 2. That was one equation with one variable.
Then came problems with two equations and two variables,
for example 4x + 2y = 8 and 3x + 2y = 6. Now the task is to find a pair of values (x, y) such that both equations are satisfied: x = 2, y = 0. The important thing here is that the solution works for both equations. For example, the pair x = 0, y = 4 only works for the first equation. Each equation considered on its own has infinitely many solutions. Only when you require both to hold is there (in this case; there are other cases) exactly one solution that works for both equations at the same time.
Now you can scale this up as far as you like. Take three equations and three variables. Again, all three equations should hold for one pair (actually triple) of values (x, y, z).
For example, x, y and z could be purchase prices.
One person buys 3x + 4y + 7z = 20 and pays 20 euros, and if you also know the purchases of two more people (who buy only the three products mentioned), you can then work out how much the products cost. That's not the classic application, but it's a good way to picture it; see the sketch below.
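Here is a quick numpy version of that shopping idea (the second and third purchases and all the totals are made up for illustration): three purchases, three unknown prices, solved as one linear system.

```python
import numpy as np

A = np.array([[3, 4, 7],    # person 1 buys 3 of x, 4 of y, 7 of z
              [1, 2, 1],    # person 2 (illustrative quantities)
              [2, 0, 3]],   # person 3 (illustrative quantities)
             dtype=float)
b = np.array([20.0, 6.0, 9.0])   # what each person paid

prices = np.linalg.solve(A, b)   # the unique (x, y, z) satisfying all three equations
print(prices)
```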
That depends on which angle you want to look at it from.
After the first semester of a maths degree, you'd say you are computing with matrices.
If you view mathematics purely functionally, as a model of reality, it depends on what information you are currently modelling with a system of equations. Then, when you solve a system of equations, you would for example be computing your car's travel time on the way from A to B.
If you look at mathematics at a fundamental level, you are simply exploiting various properties of the identity relation to set up new identity statements: transitivity, symmetry, reflexivity, and so on.
As a philosopher, I would simply answer that from a set of pieces of information you accept as true, i.e. premises, you infer a piece of information strictly logically. So when you "solve" a system of equations, you are just setting up a deductively valid argument.
Wow ❤️, I'm really enjoying your answer, but I don't know whether it matches OP's level of knowledge when they say they're reviewing school maths... 😜
You are looking for the intersection point of two lines.
A linear equation can be represented as a line in a coordinate system.
Two equations are two lines; they meet somewhere, are parallel, or are identical.
In practice these could be costs, distances, or forces. Somewhere they are equal, and that point is often interesting because that's where things change in practice: first one was more expensive, afterwards it's the other.
The other answers read rather philosophically, so here's a blunter one:
A system of equations (whether linear or not) is a collection of equations in several variables (you can think of it as a multidimensional map) that you are trying to solve all at the same time. So once you have solved such a system, you get the set of all combinations of values for the individual variables that make all of these equations true simultaneously.
In the case of linear systems you can be a bit more concrete: for a given matrix, you are looking for a vector such that matrix multiplication yields a specified other vector. What do you need this for? Linear algebra is the cornerstone of a huge number of fields, from classical mechanics and electrodynamics through quantum mechanics and optics to general relativity. Systems of equations can be set up everywhere, and the fundamental ideas you apply when solving linear systems can be reused everywhere.
As an application I'd add machine learning, data science, and statistics, which these days are probably more the "that's what I feel like doing" choice than optics and mechanics.
When you have (dt/dx)(x ± 4t) = 0, you forgot about the case dt/dx = 0, which gives t = c, so y = cx + b. These will also give you solutions.
As for your quadratic (talking about x^2 = 8y - 7):
dy/dx =x/4, so you have |x-4y/x |=|x/2|
|x/2 +7/2 |=|x/2|
This is obviously not true. That's because the functions you got do in fact have the same derivative (most of the time, though not when x is in (-7, 0)), but because of the pesky constant they're not equal. You differentiated them, but you also have to check whether the resulting solutions satisfy the original constraints.
So my approach was right, and my only mistakes were not taking the dt/dx = 0 case into consideration and not verifying the result against the given differential equation (so here we also need to verify our solutions, like we do with inverse trigonometric functions)?
Yups
This is also part of the sir's solution.
I got this question where I was given a model for a non-stationary time series, Xt = α + βt + Yt, where Yt ~ i.i.d. N(0, σ²), and I had to talk about the problems that come with using such a model to forecast far into the future (there is no training data). I was thinking that the model assumes the trend continues indefinitely, which isn't realistic, and also doesn't account for seasonal effects or repeating patterns. Are there any long term effects associated with the Yt?
Yt as you’ve defined here is just noise, so there isn’t any long term effect associated with it. If you know what Xt represents, you might be able to say more about what kind of time frame would be reasonable for the parameters (alpha, beta, and/or sigma) to be assumed constant.
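A small simulation sketch of that point (parameter values are arbitrary; assuming numpy): because Y_t is iid noise, a long-range forecast from this model is just the extrapolated line alpha + beta*t, and the forecast uncertainty never widens with the horizon, which is exactly why the "trend continues forever" assumption dominates far into the future.

```python
import numpy as np

alpha, beta, sigma = 10.0, 0.5, 2.0
rng = np.random.default_rng(5)

t_obs = np.arange(1, 101)
x_obs = alpha + beta * t_obs + rng.normal(scale=sigma, size=t_obs.size)  # observed series

t_future = np.arange(101, 501)
forecast = alpha + beta * t_future      # the deterministic trend just carries on
half_width = 1.96 * sigma               # iid noise: interval width is constant in the horizon
print(forecast[-1], "+/-", round(half_width, 2))
```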
I'm confused since it can be rearranged to x - y = 0, and that can be considered a linear equation, but as written (x/y) = 1 is not. Can someone help me understand why that happens, or what it is called? Thanks.
In a linear equation, all variables must be raised to the power of 1. x/y = 1 can be written as x*y^(-1) = 1; notice that y is raised to a negative exponent, which isn't allowed for linear equations. So in the form you mentioned it isn't a linear equation; however, x = y or x - y = 0 are linear equations.
But in the equivalence they are still the same, x/y = 1 and x = y, as long as y is not zero?
Yes. Honestly, they really are equal for all intents and purposes (except for that restriction with y not equal to 0), but technically x/y = 1 is not linear because y is in the denominator.
As long as y is not zero. x can be zero.
x/y = 1 isn't a linear equation. It can be transformed to a linear equation. But you still have to discard some solutions of the linear equation for the given equation.
Easier example: Take x = 0. It is linear. Multiply by (x-1). That gives you a quadratic equation. You can solve that quadratic but you have to discard a solution.
x/y = 1 is equivalent to the linear equation y = x, with the additional condition that x and y can't be zero (since y was the denominator, and x = y)
(x/y) = 1 is not a linear equation, but it is equivalent to the linear equation x = y, as long as y != 0.
Similarly, x^2 = y^2 is not a linear equation, although its solution set is the union of those of the linear equations x = y and x = -y.
Equivalence is not equality. A certain equation A can have the same solution set as a linear equation B, but it does not mean that A is a linear equation. It just means that A is equivalent to B. A linear equation is linear because of the form of the terms, not because of the solutions.
Can anybody help me solve this? And what is it called specifically? I tried searching linear/nonlinear equations on YouTube but can't find a tutorial on this type that has many x's... Any help appreciated!
This is a cubic equation. There is a formula, but it's pretty ugly. You may be better off checking a few values and factoring. Try plugging in some small values, -1, 0, 1, and if you get a zero, then you can factor it out as (x - xzero)(ax^2 + bx + c), where xzero is the value that gave you the zero (so -1, 0, or 1 from the ones I suggested as easy test cases). Then solve for a, b, c and use the quadratic formula for the last two roots.
You don't need to pick arbitrary values to plug in. You can use the rational roots theorem which says that if a rational root exists it will be of the form +- p/q where p and q are some factors of the last and first coefficient respectively.
Absolutely. Just thought the rational root theorem was a bit advanced to bring up. But you’re certainly correct and it’s the right approach
Is the / in +-p/q supposed to be a stand in for the word “or” because my brain can’t stop reading it as “p divided by q” for some reason even though that makes no sense.
Factor it by grouping
x^2 * (2x - 3) - 1 * (2x - 3) <= 0
(x^2 - 1) * (2x - 3) <= 0
(x - 1) * (x + 1) * (2x - 3) <= 0
When does it equal 0?
x - 1 = 0 =>> x = 1
x + 1 = 0 =>> x = -1
2x - 3 = 0 =>> x = 3/2
So you now have 4 domains to look through:
(-inf , -1)U(-1 , 1)U(1 , 3/2)U(3/2 , inf)
Pick values from each domain and look at what the sign is.
x = -2 , x = 0 , x = 5/4 , x = 2 will tell you what you need to know.
Now I can tell ya, just from looking at the initial cubic, that (-inf , -1] and [1 , 3/2] are your intervals (the endpoints are included since the expression equals 0 there), but that's because cubics either start out negative and become positive (if the leading coefficient is positive) or start out positive and become negative (if the leading coefficient is negative), and since there aren't any repeated roots, I know that this one makes that nice standard cubic shape.
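For what it's worth, the same sign check can be scripted (a sketch assuming numpy; the test points are the ones suggested above):

```python
# Roots of 2x^3 - 3x^2 - 2x + 3 and the sign on each interval between them.
import numpy as np

coefs = [2, -3, -2, 3]
roots = np.sort(np.roots(coefs).real)        # -1, 1, 1.5
print(roots)

for p in (-2, 0, 1.25, 2):                   # one test point per interval
    value = np.polyval(coefs, p)
    print(p, "negative" if value < 0 else "non-negative")
# The inequality <= 0 therefore holds on (-inf, -1] and on [1, 3/2].
```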
You can also evaluate these three:
x - 1
x + 1
2x - 3
And realise that to get a positive number, 0 or 2 of them need to be negative.
If X < -1, all three will be negative, so the result is negative.
If X > -1 and X < 1, then 1) and 3) will be negative but 2) will be positive, so the result is positive.
If X > 1 and X < 3/2, only 3) is negative, so the result is negative again.
And lastly, if X > 3/2, all three will be positive.
So the solution is X <= -1 or 1 <= X <= 3/2.
Notice that adding all the coefficients together gives you zero; that means x = 1 is a root.
So now you have one root. See if there are others.
Sketch the graph.
Now you can see where it is above or below zero.
2x³-3x²-2x+3 <= 0
(2x-3)(x²-1) <= 0
(2x-3)(x+1)(x-1) <= 0
2(x-1.5)(x+1)(x-1) <= 0
x = -1, x = 1, x = 1.5. The variable x belongs to the range from negative infinity to -1 and from 1 to 1.5.
If you need a short solution: find the roots. You can use polynomial division, but you need one root as a starting point; try 1, -1, 3, -3 as values. If that doesn't work, use Newton's method to calculate a root.
After that, split off one factor and you are left with a quadratic equation, which you can easily solve.
Short solution: draw the graph with the help of the roots, starting in the 3rd quadrant. Just look where the graph is below the x-axis. Those are your intervals.
Long solution: write the equation in factorised form and do a case distinction. Two factors have to be positive and one negative (or all three negative) to get something negative in total. Quite tedious; you have several cases to look at.
differences between linear and nonlinear equations
Key Differences Between Linear and Nonlinear Equations
Key dimensions of comparison: definition, graphical representation, degree, solutions, and complexity.
Takeaway: Understanding the differences between linear and nonlinear equations is crucial for selecting the appropriate methods for solving problems in mathematics and applied fields. Linear equations are simpler and more straightforward, while nonlinear equations can model more complex relationships.