Posts tagged math
How to calculate the work done by a variable force F(x)

To calculate the work done when a variable force is applied to lift an object of some mass or weight, we’ll use the formula W = integral [a,b] F(x) dx, where W is the work done, F(x) is the equation of the variable force, and a and b are the starting and ending heights of the object.
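As a quick sketch of the formula, here’s a short Python example using sympy, with a hypothetical force F(x) = 100x (something like a 100 N/m spring) applied over a made-up interval from x = 0 to x = 2:

```python
from sympy import symbols, integrate

x = symbols('x')

# Hypothetical variable force F(x) = 100x, applied from x = 0 to x = 2
F = 100 * x

# W = integral from a to b of F(x) dx
W = integrate(F, (x, 0, 2))
print(W)  # 200
```

Swapping in any other F(x) or interval [a, b] works the same way.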

Read More
Reviewing the basics of matrices for differential equations

We’ll learn much more about matrices in Linear Algebra. For now, we just need a brief introduction to matrices (for some, this may be a review from Precalculus), since we’ll be using them extensively to solve systems of differential equations.

Read More
Using variation of parameters to solve a system of nonhomogeneous differential equations

If undetermined coefficients isn’t a viable method for solving a nonhomogeneous system of differential equations, we can always use the method of variation of parameters instead. Just like with undetermined coefficients, we have to start by finding the corresponding complementary solution, which is the general solution of the associated homogeneous equation.
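To make the method concrete, here’s a sympy sketch for a hypothetical diagonal system x' = Ax + F(t), where the matrix A, the forcing vector F, and the fundamental matrix X(t) are all made up for illustration (the diagonal A keeps the complementary solution, and therefore X(t), easy to write down):

```python
from sympy import Matrix, symbols, exp, integrate, simplify

t = symbols('t')

# Hypothetical system x' = A x + F(t) with a diagonal matrix A
A = Matrix([[1, 0], [0, 2]])
F = Matrix([exp(3*t), exp(3*t)])

# Fundamental matrix X(t) built from the complementary solution
X = Matrix([[exp(t), 0], [0, exp(2*t)]])

# Variation of parameters: x_p = X(t) * integral of X(t)^(-1) F(t) dt
u = (X.inv() * F).applyfunc(lambda e: integrate(e, t))
x_p = X * u

# Verify that x_p' = A x_p + F
residual = simplify(x_p.diff(t) - (A * x_p + F))
print(residual)  # zero vector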

Read More
How to express polar points multiple ways by changing the values of r and theta

There are infinitely many ways to express the same point in polar coordinates. We can 1) keep the value of r the same but add or subtract any multiple of 2π from theta, and/or 2) change the value of r to -r while we add or subtract any odd multiple of π from theta.
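Here’s a small Python check of both rules, using a hypothetical point (r, theta) = (2, π/3): converting each polar representation to Cartesian coordinates shows they all land on the same point.

```python
from math import cos, sin, pi, isclose

def to_cartesian(r, theta):
    # Convert a polar point (r, theta) to Cartesian (x, y)
    return (r * cos(theta), r * sin(theta))

# Three polar representations of the same hypothetical point
p1 = to_cartesian(2, pi / 3)            # the original point
p2 = to_cartesian(2, pi / 3 + 2 * pi)   # same r, theta shifted by 2*pi
p3 = to_cartesian(-2, pi / 3 + pi)      # r negated, theta shifted by pi

print(p1)
print(p2)
print(p3)
```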

Read More
Finding the transpose of a matrix and then finding its determinant

The transpose of a matrix is simply the matrix you get when you swap all the rows and columns. In other words, the first row becomes the first column, the second row becomes the second column, and the nth row becomes the nth column. The determinant of a transpose of a square matrix will always be equal to the determinant of the original matrix.
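Here’s a quick sympy check of that last claim, using a hypothetical 3x3 matrix:

```python
from sympy import Matrix

# A hypothetical 3x3 matrix
A = Matrix([[2, 1, 0],
            [3, 4, 5],
            [1, 0, 6]])

A_T = A.T  # rows become columns

print(A.det())    # 35
print(A_T.det())  # 35, the same as det(A)
```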

Read More
Solving linear differential equations initial value problems

In the last lesson about linear differential equations, all the general solutions we found contained a constant of integration, C. But we’re often interested in finding a value for C in order to generate a particular solution for the differential equation. This applies not only to linear differential equations, but to any other form of differential equation as well. The information we’ll need in order to find C is an initial condition, which is the value of the solution at a specific point.
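Here’s a sympy sketch of the idea, using a hypothetical initial value problem dy/dx + 2y = 0 with y(0) = 3 (the equation and initial condition are made up for illustration):

```python
from sympy import Function, Eq, dsolve, symbols, exp

x = symbols('x')
y = Function('y')

# Hypothetical initial value problem: dy/dx + 2y = 0 with y(0) = 3
ode = Eq(y(x).diff(x) + 2 * y(x), 0)

general = dsolve(ode)                    # general solution, contains C1
particular = dsolve(ode, ics={y(0): 3})  # initial condition pins down C1

print(general)
print(particular)
```

The general solution carries the arbitrary constant C1; passing the initial condition through `ics` fixes C1 and produces the particular solution y(x) = 3e^(-2x).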

Read More
Implicit differentiation for single variable functions

Up to now, we’ve been differentiating functions f(x) defined in terms of x, or equations that define y in terms of x. In other words, every equation we’ve differentiated has had the variables separated on either side of the equal sign. For instance, the equation y=3x^2+2x+1 has the y variable on the left side, and the x variable on the right side. We don’t have x and y variables mixed together on the left, and they aren’t mixed together on the right, either.
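When x and y are mixed together, implicit differentiation finds dy/dx anyway. Here’s a sympy sketch using a classic mixed equation, the circle x^2 + y^2 = 25 (chosen just for illustration):

```python
from sympy import symbols, idiff

x, y = symbols('x y')

# An equation with x and y mixed together: the circle x^2 + y^2 = 25,
# written as x^2 + y^2 - 25 = 0
expr = x**2 + y**2 - 25

# dy/dx found by implicit differentiation
dydx = idiff(expr, y, x)
print(dydx)  # -x/y
```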

Read More
Imaginary and complex numbers and their properties

In this lesson we’ll look at the imaginary number i, what it means, and how to use it in expressions. The imaginary number i is defined as the square root of -1, and we can use it in algebraic expressions. An imaginary number (in general) is defined as a number that can be written as a product of a real number and i. For instance, 4i and -15i are imaginary numbers.
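Python has complex numbers built in (it writes the imaginary unit as j rather than i), so we can check these properties directly:

```python
# Python uses j for the imaginary unit, so i is written as 1j
i = 1j

print(i * i)         # (-1+0j): i times i is -1
print(4 * i)         # the imaginary number 4i
print(-15 * i)       # the imaginary number -15i
print((2 + 3j) * i)  # (-3+2j)
```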

Read More
Linear combinations and span

The span of a set of vectors is the collection of all vectors which can be represented by some linear combination of the set. That sounds confusing, but let’s think back to the basis vectors i=(1,0) and j=(0,1) in R^2. If you choose absolutely any vector, anywhere in R^2, you can get to that vector using a linear combination of i and j. If I choose (13,2), I can get to it with the linear combination a=13i+2j, or if I choose (-1,-7), I can get to it with the linear combination a=-i-7j. There’s no vector you can find in R^2 that you can’t reach with a linear combination of i and j.
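The two linear combinations from the paragraph above can be written out directly with numpy:

```python
import numpy as np

# Standard basis vectors in R^2
i = np.array([1, 0])
j = np.array([0, 1])

# Any vector in R^2 is some linear combination of i and j:
a = 13 * i + 2 * j   # reaches the vector (13, 2)
b = -1 * i - 7 * j   # reaches the vector (-1, -7)

print(a)  # [13  2]
print(b)  # [-1 -7]
```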

Read More
Undetermined coefficients for solving nonhomogeneous systems of differential equations

The method of undetermined coefficients may work well when the entries of the vector F are constants, polynomials, exponentials, sines and cosines, or some combination of these. Our guesses for the particular solution will be similar to the kinds of guesses we used to solve second order nonhomogeneous equations, except that we’ll use vectors instead of constants.
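Here’s the simplest case as a numpy sketch: when F is a constant vector, the guess for the particular solution is a constant vector a, and substituting it into the system pins a down. The matrix A and vector F below are hypothetical values chosen for illustration.

```python
import numpy as np

# Hypothetical system x' = A x + F with a constant forcing vector F
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
F = np.array([5.0, 6.0])

# Guess a constant particular solution x_p = a. Then x_p' = 0,
# so substituting gives 0 = A a + F, which means a = -A^(-1) F.
a = -np.linalg.solve(A, F)

# Verify that the guess satisfies the system
print(A @ a + F)  # should be (numerically) the zero vector
```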

Read More
Cramer's rule for solving systems

Cramer’s Rule is a simple rule that lets us use determinants to solve a system of equations. It tells us that we can solve for any variable in the system by calculating D_v/D, where D is the determinant of the coefficient matrix, and D_v is the determinant of that same matrix with the answer column substituted into the column representing the variable we’re trying to solve for.
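Here’s Cramer’s Rule carried out in sympy on a small hypothetical system (2x + y = 5 and x + 3y = 10, chosen for illustration):

```python
from sympy import Matrix

# Hypothetical system:
#   2x + 1y = 5
#   1x + 3y = 10
A = Matrix([[2, 1], [1, 3]])  # coefficient matrix
b = Matrix([5, 10])           # answer column

D = A.det()

# Substitute the answer column into each variable's column
D_x = A.copy(); D_x[:, 0] = b
D_y = A.copy(); D_y[:, 1] = b

x = D_x.det() / D
y = D_y.det() / D
print(x, y)  # 1 3
```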

Read More
Pivot entries and row-echelon forms

Now that we know how to use row operations to manipulate matrices, we can use them to simplify a matrix in order to solve the system of linear equations the matrix represents. Our goal will be to use these row operations to change the matrix into either row-echelon form, or reduced row-echelon form.
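Rather than performing the row operations by hand, we can let sympy carry a matrix all the way to reduced row-echelon form. The augmented matrix below encodes a hypothetical system (x + 2y = 4 and 3x + 4y = 10):

```python
from sympy import Matrix

# Augmented matrix for the hypothetical system:
#   x + 2y = 4
#   3x + 4y = 10
M = Matrix([[1, 2, 4],
            [3, 4, 10]])

# Reduced row-echelon form: each pivot entry is 1,
# with zeros above and below it
R, pivots = M.rref()
print(R)       # Matrix([[1, 0, 2], [0, 1, 1]])
print(pivots)  # (0, 1): the pivot columns
```

Reading off the last column gives the solution x = 2, y = 1.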

Read More
Solving linear differential equations

To investigate first order differential equations, we’ll start by looking at equations given in a few very specific forms. The first of these is a first order linear differential equation. First order linear differential equations are equations given in the form dy/dx+P(x)y=Q(x).
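These equations are solved with an integrating factor mu(x) = e^(integral of P(x) dx). Here’s a sympy sketch using the hypothetical choices P(x) = 2 and Q(x) = x, then checking that the result really satisfies the equation:

```python
from sympy import symbols, exp, integrate, simplify, diff

x = symbols('x')

# Hypothetical equation in the form dy/dx + P(x)y = Q(x)
P = 2  # P(x) = 2
Q = x  # Q(x) = x

# Integrating factor: mu(x) = exp(integral of P(x) dx)
mu = exp(integrate(P, x))

# One solution (the constant of integration is omitted here):
# y = (1/mu) * integral of mu(x) Q(x) dx
y = integrate(mu * Q, x) / mu

# Check that y satisfies dy/dx + P y = Q
residual = simplify(diff(y, x) + P * y - Q)
print(residual)  # 0
```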

Read More
Power rule for derivatives

It’ll be faster for us to use the derivative rules we’re about to learn. In this lesson, we’ll look at the first of those derivative rules, which is the power rule. The power rule tells us that, to take the derivative of a power function, we just multiply the exponent by the coefficient, and then subtract 1 from the exponent.
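We can confirm the rule with sympy on a hypothetical power function, f(x) = 5x^3:

```python
from sympy import symbols, diff

x = symbols('x')

# Power rule: d/dx [c*x^n] = c*n*x^(n-1)
f = 5 * x**3
print(diff(f, x))  # 15*x**2, i.e. (5*3)*x^(3-1)
```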

Read More
How to solve Ax=b, given some specific vector b

We know how to find the null space of a matrix as the full set of vectors x that satisfy Ax=O. But now we want to be able to solve the more general equation Ax=b. In other words, we want to be able to solve this equation when that vector on the right side is some non-zero b, instead of being limited to solving the equation only when b=O.
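Numerically, solving Ax=b for a specific non-zero b is a one-liner. The matrix A and vector b below are hypothetical values for illustration:

```python
import numpy as np

# Hypothetical equation A x = b with a specific non-zero b
A = np.array([[1.0, 2.0],
              [3.0, 5.0]])
b = np.array([5.0, 13.0])

x = np.linalg.solve(A, b)
print(x)      # approximately [1. 2.]
print(A @ x)  # reproduces b
```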

Read More
Solving initial value problems with general forcing functions using a convolution integral

Convolution integrals are particularly useful for finding the general solution to a second order differential equation in the form ay''+by'+cy=g(t). Notice in this equation that the forcing function g(t) is not defined explicitly. Without a convolution integral, we wouldn’t be able to find the solution to this kind of differential equation, even given initial conditions.
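As a sketch of the idea, take the hypothetical equation y'' + y = g(t) with y(0) = y'(0) = 0; its impulse response is h(t) = sin(t), so the solution is the convolution of sin(t) with g(t). With sympy we can evaluate that convolution for a specific forcing function and verify it satisfies the equation:

```python
from sympy import symbols, sin, integrate, simplify, diff

t, tau = symbols('t tau')

# For the hypothetical IVP y'' + y = g(t), y(0) = y'(0) = 0, the solution is
#   y(t) = integral from 0 to t of sin(t - tau) * g(tau) dtau
g = tau  # try the forcing function g(t) = t

y = integrate(sin(t - tau) * g, (tau, 0, t))
print(simplify(y))  # t - sin(t)

# Check the differential equation: y'' + y should equal g(t) = t
print(simplify(diff(y, t, 2) + y))  # t
```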

Read More
Phase portraits for systems of differential equations with complex eigenvalues

Now we want to look at the phase portraits of systems with complex eigenvalues. The equilibrium of a system with complex eigenvalues that have no real part is a stable center around which the trajectories revolve, without ever getting closer to or further from equilibrium. The equilibrium of a system with complex eigenvalues with a positive real part is an unstable spiral that repels all trajectories. The equilibrium of a system with complex eigenvalues with a negative real part is an asymptotically stable spiral that attracts all trajectories.
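The three cases above can be checked numerically: compute the eigenvalues of the system matrix and look at the sign of their real part. Here’s a numpy sketch with three hypothetical matrices, one for each case:

```python
import numpy as np

def classify(A):
    # Classify the equilibrium of x' = A x when the eigenvalues are
    # complex, based on the sign of their real part
    eigenvalues = np.linalg.eigvals(A)
    re = eigenvalues.real[0]
    if abs(re) < 1e-12:
        return "stable center"
    return "unstable spiral" if re > 0 else "asymptotically stable spiral"

print(classify(np.array([[0.0, 1.0], [-1.0, 0.0]])))    # eigenvalues ±i
print(classify(np.array([[1.0, 1.0], [-1.0, 1.0]])))    # eigenvalues 1 ± i
print(classify(np.array([[-1.0, 1.0], [-1.0, -1.0]])))  # eigenvalues -1 ± i
```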

Read More