Posts tagged learn online
Imaginary and complex numbers and their properties

In this lesson we’ll look at the imaginary number i, what it means, and how to use it in algebraic expressions. The imaginary number i is defined as the square root of -1. An imaginary number (in general) is defined as a number that can be written as the product of a nonzero real number and i. For instance, 4i and -15i are imaginary numbers.
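To see these definitions in action, here’s a minimal sketch using Python’s sympy library (the specific values are just examples); note that sympy writes the imaginary unit as I.

```python
from sympy import I, sqrt

# i is defined so that i^2 = -1
print(I**2)           # -1

# Square roots of negative numbers are imaginary numbers:
# sqrt(-16) = sqrt(16)*sqrt(-1) = 4i
print(sqrt(-16))      # 4*I

# Imaginary numbers combine like like terms: 4i + (-15i) = -11i
print(4*I - 15*I)     # -11*I
```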

Read More
Linear combinations and span

The span of a set of vectors is the collection of all vectors that can be represented by some linear combination of the vectors in the set. That sounds confusing, but let’s think back to the basis vectors i=(1,0) and j=(0,1) in R^2. If you choose absolutely any vector, anywhere in R^2, you can get to that vector using a linear combination of i and j. If I choose (13,2), I can get to it with the linear combination a=13i+2j, or if I choose (-1,-7), I can get to it with the linear combination a=-i-7j. There’s no vector in R^2 that you can’t reach with a linear combination of i and j.
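Here’s a quick numerical sketch of that idea in Python with numpy; the vectors are the same examples used in the paragraph above.

```python
import numpy as np

# Basis vectors i and j in R^2
i = np.array([1, 0])
j = np.array([0, 1])

# Any vector (x, y) is the linear combination x*i + y*j
a = 13 * i + 2 * j
print(a)                                    # [13  2]

b = -1 * i + (-7) * j
print(b)                                    # [-1 -7]

# In general, (x, y) = x*i + y*j, so i and j span R^2
x, y = 4.5, -3.2
print(np.allclose(x * i + y * j, [x, y]))   # True
```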

Read More
Undetermined coefficients for solving nonhomogeneous systems of differential equations

The method of undetermined coefficients may work well when the entries of the vector F are constants, polynomials, exponentials, sines and cosines, or some combination of these. Our guesses for the particular solution will be similar to the kinds of guesses we used to solve second order nonhomogeneous equations, except that we’ll use vectors instead of constants.
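As a sketch of the simplest case, suppose the entries of F are constants. Then the natural guess is a constant vector a, and substituting it into x'=Ax+F forces 0=Aa+F, so a=-A^(-1)F whenever A is invertible. The Python example below uses a made-up matrix A and vector F, not values from the lesson.

```python
import numpy as np

# For x' = Ax + F with a constant forcing vector F, guess a constant
# particular solution x_p = a. Then x_p' = 0, so 0 = A a + F, which
# gives a = -A^(-1) F whenever A is invertible.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # example coefficient matrix (invertible)
F = np.array([5.0, 6.0])     # example constant forcing vector

a = -np.linalg.solve(A, F)   # the vector of undetermined coefficients
print(a)                     # [ 4.  -4.5]

# Verify the guess: A a + F should be the zero vector, matching x_p' = 0
print(np.allclose(A @ a + F, 0))   # True
```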

Read More
Cramer's rule for solving systems

Cramer’s Rule lets us use determinants to solve a system of equations. It tells us that we can solve for any variable in the system by calculating D_v/D, where D is the determinant of the coefficient matrix, and D_v is the determinant of that same coefficient matrix with the answer column substituted into the column representing the variable we’re solving for.
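Here’s a small Python sketch of the rule for a 2x2 system (the system itself is just an example), using numpy determinants for D, D_x, and D_y.

```python
import numpy as np

# Solve the system  2x + y = 5
#                    x - y = 1   using Cramer's rule
A = np.array([[2.0,  1.0],
              [1.0, -1.0]])       # coefficient matrix
b = np.array([5.0, 1.0])          # answer column

D = np.linalg.det(A)              # determinant of the coefficient matrix

# D_x: replace the x-column with the answer column; D_y: same for y
A_x = A.copy(); A_x[:, 0] = b
A_y = A.copy(); A_y[:, 1] = b

x = np.linalg.det(A_x) / D
y = np.linalg.det(A_y) / D
print(x, y)                       # 2.0 1.0

# Cross-check against a direct solve
print(np.linalg.solve(A, b))      # [2. 1.]
```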

Read More
Pivot entries and row-echelon forms

Now that we know how to use row operations to manipulate matrices, we can use them to simplify a matrix in order to solve the system of linear equations the matrix represents. Our goal will be to use these row operations to change the matrix into either row-echelon form or reduced row-echelon form.
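For instance, sympy can carry out those row operations for us. Here’s a minimal sketch (the augmented matrix is just an example) showing the reduced row-echelon form and the pivot columns.

```python
from sympy import Matrix

# Augmented matrix for the system   x + 2y = 5
#                                  3x + 4y = 6
M = Matrix([[1, 2, 5],
            [3, 4, 6]])

# rref() returns the reduced row-echelon form and the pivot columns
R, pivots = M.rref()
print(R)        # Matrix([[1, 0, -4], [0, 1, 9/2]])
print(pivots)   # (0, 1)
```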

Read More
Solving linear differential equations

To investigate first order differential equations, we’ll start by looking at equations given in a few very specific forms. The first of these is the first order linear differential equation, which is any equation that can be written in the form dy/dx+P(x)y=Q(x).
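As a quick illustration, sympy’s dsolve can handle an equation in exactly this form; P(x)=1 and Q(x)=e^x below are just example choices, not from the lesson.

```python
from sympy import Function, dsolve, Eq, symbols, exp

x = symbols('x')
y = Function('y')

# dy/dx + P(x)y = Q(x), with the example choices P(x) = 1, Q(x) = e^x
ode = Eq(y(x).diff(x) + y(x), exp(x))
print(dsolve(ode, y(x)))      # Eq(y(x), C1*exp(-x) + exp(x)/2)
```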

Read More
Power rule for derivatives

It’ll be faster for us to use the derivative rules we’re about to learn. In this lesson, we’ll look at the first of those rules: the power rule. The power rule tells us that, to take the derivative of a power function like ax^n, we multiply the exponent by the coefficient, and then subtract 1 from the exponent.
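A quick way to check the rule is to compare against sympy’s symbolic derivative; the functions below are just example power functions.

```python
from sympy import diff, symbols

x = symbols('x')

# Power rule: d/dx (a*x^n) = a*n*x^(n-1)
print(diff(7 * x**3, x))    # 21*x**2
print(diff(x**5, x))        # 5*x**4
print(diff(4 / x**2, x))    # -8/x**3  (since 4/x^2 = 4*x^(-2))
```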

Read More
How to solve Ax=b, given some specific vector b

We know how to find the null space of a matrix as the full set of vectors x that satisfy Ax=O. But now we want to be able to solve the more general equation Ax=b. In other words, we want to be able to solve this equation when that vector on the right side is some non-zero b, instead of being limited to solving the equation only when b=O.
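Here’s a small sympy sketch of the difference (the matrix and vector are made-up examples): linsolve handles Ax=b for a specific b, while nullspace returns the solutions of Ax=O.

```python
from sympy import Matrix, linsolve, symbols

x1, x2 = symbols('x1 x2')

# Solve Ax = b for a specific non-zero vector b (example values)
A = Matrix([[1, 2],
            [3, 4]])
b = Matrix([5, 6])
print(linsolve((A, b), [x1, x2]))   # {(-4, 9/2)}

# By contrast, the null space is the set of solutions to Ax = O;
# this A is invertible, so its null space is just the zero vector
print(A.nullspace())                # []
```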

Read More
Solving initial value problems with general forcing functions using a convolution integral

Convolution integrals are particularly useful for finding the general solution to a second order differential equation in the form ay''+by'+cy=g(t). Notice in this equation that the forcing function g(t) is not defined explicitly. Without a convolution integral, we wouldn’t be able to find the solution to this kind of differential equation, even given initial conditions.
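As a sketch of how the convolution integral works, take y''+y=g(t) with zero initial conditions; the impulse response is h(t)=sin(t), so the solution is y(t) = the integral from 0 to t of sin(t-tau)g(tau) dtau. The Python example below picks the concrete forcing function g(t)=1 just to verify that the integral really solves the equation.

```python
from sympy import symbols, sin, integrate, simplify

t, tau = symbols('t tau', positive=True)

# For y'' + y = g(t) with y(0) = y'(0) = 0, the impulse response is
# h(t) = sin(t), so the convolution integral gives
#     y(t) = integral from 0 to t of sin(t - tau)*g(tau) dtau
# Try the concrete forcing function g(t) = 1:
y = integrate(sin(t - tau) * 1, (tau, 0, t))
print(simplify(y))                  # 1 - cos(t)

# Check that y really solves y'' + y = 1
print(simplify(y.diff(t, 2) + y))   # 1
```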

Read More
Phase portraits for systems of differential equations with complex eigenvalues

Now we want to look at the phase portraits of systems with complex eigenvalues. The equilibrium of a system with complex eigenvalues that have no real part is a stable center around which the trajectories revolve, without ever getting closer to or further from equilibrium. The equilibrium of a system with complex eigenvalues with a positive real part is an unstable spiral that repels all trajectories. The equilibrium of a system with complex eigenvalues with a negative real part is an asymptotically stable spiral that attracts all trajectories.
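A quick numpy sketch makes the classification concrete; the three matrices below are made-up examples whose eigenvalues are ±i, 1±i, and -1±i respectively.

```python
import numpy as np

# Classify the equilibrium at the origin by the eigenvalues of the
# coefficient matrix A; the three matrices are made-up examples
examples = {
    "stable center (no real part)": np.array([[0.0, 1.0], [-1.0, 0.0]]),
    "unstable spiral (Re > 0)":     np.array([[1.0, 1.0], [-1.0, 1.0]]),
    "stable spiral (Re < 0)":       np.array([[-1.0, 1.0], [-1.0, -1.0]]),
}

for name, A in examples.items():
    print(name, "->", np.linalg.eigvals(A))
# stable center (no real part) -> [0.+1.j 0.-1.j]
# unstable spiral (Re > 0)     -> [1.+1.j 1.-1.j]
# stable spiral (Re < 0)       -> [-1.+1.j -1.-1.j]
```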

Read More
Classifying differential equations by order, linearity, and homogeneity

Whereas partial derivatives are indicated with the “partial” symbol ∂, we never see this notation when we’re dealing with ordinary derivatives. That’s because an ordinary derivative is the derivative of a function of a single variable. Because there’s only one variable, there’s no need to indicate the partial derivative with respect to one variable versus another.
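sympy’s pretty printer follows this same convention, which makes for a quick demonstration (this sketch assumes a terminal that can display the ∂ symbol).

```python
from sympy import Function, symbols, Derivative, pprint

x, y = symbols('x y')
f = Function('f')

# An ordinary derivative: f depends on the single variable x,
# so this prints with the ordinary d
pprint(Derivative(f(x), x))

# A partial derivative: f depends on both x and y, so this prints
# with the partial symbol to say which variable we're differentiating
pprint(Derivative(f(x, y), x))
```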

Read More
Solving differential equation initial value problems with step functions as forcing functions

In general, to solve the initial value problem, we’ll follow these steps:

1. Make sure the forcing function is being shifted correctly, and identify the function being shifted.
2. Apply a Laplace transform to each part of the differential equation, substituting initial conditions to simplify.
3. Solve for Y(s).
4. Apply an inverse Laplace transform to find y(t).
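Here’s a minimal sympy sketch of steps 2 through 4 for a made-up first order example, y'+y=u(t-1) with y(0)=0, where u is the unit step (Heaviside) function.

```python
from sympy import (symbols, Heaviside, laplace_transform,
                   inverse_laplace_transform)

t, s = symbols('t s', positive=True)

# Step 2: transform the shifted forcing function u(t - 1)
F = laplace_transform(Heaviside(t - 1), t, s, noconds=True)
print(F)                                  # exp(-s)/s

# For y' + y = u(t - 1) with y(0) = 0, transforming gives
# s*Y(s) + Y(s) = exp(-s)/s. Step 3: solve for Y(s).
Y = F / (s + 1)

# Step 4: apply the inverse Laplace transform to find y(t)
y = inverse_laplace_transform(Y, s, t)
print(y)   # (1 - exp(1 - t))*Heaviside(t - 1), up to equivalent forms
```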

Read More
Estimating definite integrals using power series

We can use power series to estimate definite integrals in the same way we used them to estimate indefinite integrals. The only difference is that we’ll evaluate over the given interval once we find a power series that represents the original integral. To evaluate over the interval, we’ll expand the power series through its first few terms, and then evaluate each term separately over the interval.
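Here’s a sketch of that process in sympy for a classic example (not one from the lesson): estimating the integral of e^(-x^2) from 0 to 1/2, an integrand with no elementary antiderivative.

```python
from sympy import symbols, exp, integrate, series, erf, sqrt, pi, Rational, N

x = symbols('x')

# Represent the integrand by the first few terms of its power series
p = series(exp(-x**2), x, 0, 8).removeO()
print(p)                                  # -x**6/6 + x**4/2 - x**2 + 1

# Integrate term by term, then evaluate over the interval [0, 1/2]
estimate = integrate(p, (x, 0, Rational(1, 2)))
print(N(estimate))                        # about 0.46127

# Compare with the exact value, (sqrt(pi)/2)*erf(1/2)
print(N(sqrt(pi) / 2 * erf(Rational(1, 2))))   # about 0.46128
```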

Read More
Definition of a linear subspace, with several examples

A subspace (or linear subspace) of R^2 is a set of vectors in R^2, where the set meets three specific conditions: 1) The set includes the zero vector, 2) The set is closed under scalar multiplication, and 3) The set is closed under addition.
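As a small symbolic sketch, we can check all three conditions for an example set: the vectors of the form (t, 2t), which make up the line y=2x through the origin.

```python
from sympy import symbols, expand

# Check the three subspace conditions for an example set: all vectors
# of the form (t, 2t), the line y = 2x through the origin in R^2
t, t1, t2, c = symbols('t t1 t2 c')

# 1) The zero vector is in the set (take t = 0)
v = (t, 2 * t)
print((v[0].subs(t, 0), v[1].subs(t, 0)))   # (0, 0)

# 2) Closed under scalar multiplication: c*(t, 2t) = (c*t, 2*(c*t)),
#    which still has the form (something, 2*something)
cv = (c * t, c * 2 * t)
print(cv[1] == 2 * cv[0])                   # True

# 3) Closed under addition: (t1, 2t1) + (t2, 2t2) = (t1+t2, 2(t1+t2))
w = (t1 + t2, 2 * t1 + 2 * t2)
print(expand(w[1] - 2 * w[0]) == 0)         # True
```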

Read More
How to find unit vectors and basis vectors

Any vector with a magnitude of 1 is called a unit vector, often denoted u. In general, a unit vector doesn’t have to point in a particular direction. As long as the vector is one unit long, it’s a unit vector. But oftentimes we’re interested in changing a particular vector v (with a length other than 1) into an associated unit vector. In that case, the unit vector needs to point in the same direction as v.
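To build that unit vector, we divide v by its own magnitude; here’s a minimal numpy sketch with v=(3,4) as an example.

```python
import numpy as np

# Normalize v into the unit vector pointing in the same direction:
# divide v by its own magnitude
v = np.array([3.0, 4.0])
u = v / np.linalg.norm(v)

print(u)                    # [0.6 0.8]
print(np.linalg.norm(u))    # 1.0
```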

Read More
Inverse hyperbolic integrals

Inverse hyperbolic functions follow standard rules for integration. Remember, an inverse hyperbolic function can be written two ways. For example, inverse hyperbolic sine can be written as arcsinh or as sinh^(-1). Some people argue that the arcsinh form should be used, because sinh^(-1) can be misinterpreted as 1/sinh. Whichever form you prefer, you’ll see both, so you should be able to recognize both and understand that they mean the same thing.
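sympy, for example, uses the asinh spelling; here’s a quick sketch of two standard integrals that produce inverse hyperbolic functions.

```python
from sympy import symbols, sqrt, integrate

x = symbols('x', positive=True)

# Two standard integrals whose results are inverse hyperbolic functions;
# note that sympy spells arcsinh as asinh
print(integrate(1 / sqrt(x**2 + 1), x))   # asinh(x)
print(integrate(1 / sqrt(x**2 - 1), x))   # acosh(x)
```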

Read More