Let us imagine that we have a system of coordinates $S$ and a system of coordinates $S'$ that is rotated relative to $S$. Let us consider a point $P$ that has coordinates $(x_1, x_2, x_3)$ in $S$ and coordinates $(x_1', x_2', x_3')$ in $S'$.

In general it is obvious that $x_i' \neq x_i$, and that $x_i' = x_i'(x_1, x_2, x_3)$.

Since the transformation from $S$ to $S'$ is just a rotation we can assume that the transformation is linear. Hence we can write explicitly

$x_1' = \lambda_{11} x_1 + \lambda_{12} x_2 + \lambda_{13} x_3$

$x_2' = \lambda_{21} x_1 + \lambda_{22} x_2 + \lambda_{23} x_3$

$x_3' = \lambda_{31} x_1 + \lambda_{32} x_2 + \lambda_{33} x_3$

A more compact way to write the three previous equations is:

$x_i' = \displaystyle\sum_{j=1}^{3} \lambda_{ij} x_j$

In case you don’t see how the previous equation is a more compact way of writing the first equations I’ll just lay out the $i = 1$ case:

$x_1' = \displaystyle\sum_{j=1}^{3} \lambda_{1j} x_j$

Now all that we have to do is to sum $j$ from $1$ to $3$ and we get the first equation, $x_1' = \lambda_{11} x_1 + \lambda_{12} x_2 + \lambda_{13} x_3$. For the other two a similar reasoning applies.
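The summation above translates directly into code. Here is a minimal Python sketch (the function names are mine, not from the text) that builds the $\lambda$ matrix for a rotation about the $x_3$ axis and applies $x_i' = \sum_j \lambda_{ij} x_j$ as an explicit sum; for a quarter turn of the axes, a point on the old $x_1$ axis gets coordinates $(0, -1, 0)$ in the rotated frame:

```python
import math

def rotation_about_x3(theta):
    """Matrix lambda_ij for a passive rotation by theta about the x3 axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0],
            [-s, c, 0.0],
            [0.0, 0.0, 1.0]]

def transform(lam, x):
    """x'_i = sum_j lambda_ij x_j, written as an explicit sum."""
    return [sum(lam[i][j] * x[j] for j in range(3)) for i in range(3)]

lam = rotation_about_x3(math.pi / 2)   # quarter turn of the axes
x_prime = transform(lam, [1.0, 0.0, 0.0])
# in the rotated frame the point has coordinates (0, -1, 0), up to rounding
```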

If we want to make a transformation from $S'$ to $S$ the inverse transformation is

$x_i = \displaystyle\sum_{j=1}^{3} \lambda_{ji} x_j'$

The previous notation suggests that the indices can be arranged in the form of a matrix:

$\lambda = \begin{pmatrix} \lambda_{11} & \lambda_{12} & \lambda_{13} \\ \lambda_{21} & \lambda_{22} & \lambda_{23} \\ \lambda_{31} & \lambda_{32} & \lambda_{33} \end{pmatrix}$

In the literature the previous matrix is known as the rotation matrix or transformation matrix.

** — 1. Properties of the rotation matrix — **

For the transformation $x_i' = \sum_j \lambda_{ij} x_j$ one has

$\displaystyle\sum_{k=1}^{3} \lambda_{ik} \lambda_{jk} = \delta_{ij}$

Where $\delta_{ij}$ is a symbol known as the Kronecker delta and its definition is

$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$

For the inverse transformation it is

$\displaystyle\sum_{k=1}^{3} \lambda_{ki} \lambda_{kj} = \delta_{ij}$

The previous relationships are called orthogonality relationships.
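The orthogonality relationships are easy to verify numerically. The following Python sketch (names are mine) builds a rotation about the $x_3$ axis and computes $\sum_k \lambda_{ki}\lambda_{kj}$ for every index pair, which should reproduce $\delta_{ij}$:

```python
import math

def rotation_about_x3(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

lam = rotation_about_x3(0.7)

# sum_k lambda_ki lambda_kj for every pair (i, j): should reproduce delta_ij
delta = [[sum(lam[k][i] * lam[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
```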

** — 2. Matrix operations, definitions and properties — **

Let us represent the coordinates of a point by a column vector

$x = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix}$

Using the usual notation of linear algebra we can write the transformation equations as

$x' = \lambda x$

Where we define the matrix product, $C = AB$, to be possible only when the number of columns of $A$ is equal to the number of rows of $B$.

A specific element of the matrix $C$, which we will denote by the symbol $C_{ij}$, is calculated as

$C_{ij} = \displaystyle\sum_k A_{ik} B_{kj}$

Given the definition of the matrix product it should be clear that in general one has $AB \neq BA$.

As an example let us look into $AB$ and $BA$ with

$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$

and

$B = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$

Here $AB = \begin{pmatrix} 2 & 1 \\ 4 & 3 \end{pmatrix}$ while $BA = \begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix}$, so indeed $AB \neq BA$.
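Non-commutativity can also be checked in a few lines of Python (the `matmul` helper and the two sample matrices are mine, chosen only to illustrate the point):

```python
def matmul(A, B):
    """C_ij = sum_k A_ik B_kj; requires len(A[0]) == len(B)."""
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]]
```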

We’ll say that $\lambda^t$ is the transpose of $\lambda$ and calculate the matrix elements of the transposed matrix by $\lambda^t_{ij} = \lambda_{ji}$. In a more pedestrian way one can say that in order to obtain the transpose of a given matrix one needs only to exchange its rows and columns.

For a given matrix $A$ there exists another matrix $1$ such that $1A = A1 = A$. The matrix $1$ is said to be the unit matrix and usually one represents it by

$1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$

If $AB = BA = 1$, then $A$ and $B$ are said to be the inverse of each other and $B = A^{-1}$, $A = B^{-1}$.

Now for the rotation matrices it is

$(\lambda^t \lambda)_{ij} = \displaystyle\sum_k \lambda^t_{ik} \lambda_{kj} = \sum_k \lambda_{ki} \lambda_{kj} = \delta_{ij}$

Where the second equality follows from the definition of the transpose and the last one from what we’ve seen in section 1.

Thus $\lambda^t \lambda = 1$, and $\lambda^{-1} = \lambda^t$.

Just to finish up this section let me just mention that even though, in general, matrix multiplication isn’t commutative it still is associative. Thus $(AB)C = A(BC) = ABC$. Also matrix addition has just the definition one would expect. Namely $(A + B)_{ij} = A_{ij} + B_{ij}$.

If one inverts all three axes at the same time the matrix that we get is the so-called inversion matrix and it is

$\begin{pmatrix} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix}$

Since it can be shown that rotation matrices always have their determinant equal to $1$ and that the inversion matrix has a determinant of $-1$ we know that there isn’t any continuous transformation that maps a rotation into an inversion.
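The determinant claim can be spot-checked numerically; a small Python sketch (the `det3` helper is mine) computes both determinants by cofactor expansion:

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

c, s = math.cos(0.4), math.sin(0.4)
rotation = [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]
inversion = [[-1, 0, 0], [0, -1, 0], [0, 0, -1]]
# det(rotation) = c^2 + s^2 = 1, det(inversion) = -1
```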

** — 3. Vectors and Scalars — **

In Physics quantities are either scalars or vectors (they can also be tensors but since they aren’t needed right away I’ll just pretend that they don’t exist for the time being). These two entities are defined according to their transformation properties.

Let $\lambda$ be a coordinate transformation, $x_i' = \sum_j \lambda_{ij} x_j$. For a given quantity, if under this transformation it is:

- $\phi' = \phi$, then $\phi$ is said to be a scalar.
- $A_i' = \sum_j \lambda_{ij} A_j$ for $i = 1$, $2$ and $3$, then $\vec{A} = (A_1, A_2, A_3)$ is said to be a vector.

** — 3.1. Operations between scalars and vectors — **

I think that most people here already know this but in the interest of a modicum of self-containment I’ll just enumerate some properties of scalars and vectors.

- If $\vec{A}$ and $\vec{B}$ are vectors, then $\vec{A} + \vec{B}$ is a vector.
- If $\phi$ and $\psi$ are scalars, then $\phi \psi$ is a scalar.

As an example we will show the first proposition and the reader has to show the veracity of the last proposition.

In order to show that $\vec{A} + \vec{B}$ is a vector we have to show that it transforms like a vector:

$(A_i + B_i)' = A_i' + B_i' = \displaystyle\sum_j \lambda_{ij} A_j + \sum_j \lambda_{ij} B_j = \sum_j \lambda_{ij} (A_j + B_j)$

Hence $\vec{A} + \vec{B}$ transforms like a vector.

** — 4. Vector “products” — **

The operations between scalars are pretty much well known by everybody, hence we won’t take a look at them, but maybe it is best for us to take a look at two operations between vectors that are crucial for our future development.

** — 4.1. Scalar product — **

We can construct a scalar by using two vectors. This scalar is a measure of the projection of one vector onto the other. Its definition is

$\vec{A} \cdot \vec{B} = \displaystyle\sum_i A_i B_i$

For this operation to deserve its name, one still has to prove that the result indeed is a scalar.

First one writes $A_i' = \sum_j \lambda_{ij} A_j$ and $B_i' = \sum_k \lambda_{ik} B_k$, where one changes the index of the second summation because we’ll have to multiply the two quantities and that way the final result can be achieved much more easily.

Now it is

$\vec{A}' \cdot \vec{B}' = \displaystyle\sum_i A_i' B_i' = \sum_i \Big( \sum_j \lambda_{ij} A_j \Big) \Big( \sum_k \lambda_{ik} B_k \Big) = \sum_{j,k} \Big( \sum_i \lambda_{ij} \lambda_{ik} \Big) A_j B_k = \sum_{j,k} \delta_{jk} A_j B_k = \sum_j A_j B_j = \vec{A} \cdot \vec{B}$

Hence $\vec{A} \cdot \vec{B}$ is a scalar.
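The invariance of the scalar product can also be checked numerically. This Python sketch (the sample vectors and helper names are mine) rotates two vectors and compares the dot products before and after:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def transform(lam, v):
    return [sum(lam[i][j] * v[j] for j in range(3)) for i in range(3)]

c, s = math.cos(1.1), math.sin(1.1)
lam = [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]
a, b = [1.0, 2.0, 3.0], [-4.0, 0.5, 2.0]
# the scalar product is unchanged by the rotation
before = dot(a, b)
after = dot(transform(lam, a), transform(lam, b))
```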

** — 4.2. Vector product — **

First we have to introduce the permutation symbol $\epsilon_{ijk}$. Its definition is: $\epsilon_{ijk} = 0$ if two or three of its indices are equal; $\epsilon_{ijk} = 1$ if $ijk$ is an even permutation of $123$ (the even permutations are $123$, $231$ and $312$); $\epsilon_{ijk} = -1$ if $ijk$ is an odd permutation of $123$ (the odd permutations are $132$, $213$ and $321$).

The vector product, $\vec{C}$, of two vectors $\vec{A}$ and $\vec{B}$ is denoted by $\vec{C} = \vec{A} \times \vec{B}$.

To calculate the components of the vector $\vec{C}$ the following equation is to be used:

$C_i = \displaystyle\sum_{j,k} \epsilon_{ijk} A_j B_k$

Where $\sum_{j,k}$ is shorthand notation for $\sum_j \sum_k$.

As an example let us look into $C_1$:

$C_1 = \displaystyle\sum_{j,k} \epsilon_{1jk} A_j B_k = \epsilon_{123} A_2 B_3 + \epsilon_{132} A_3 B_2 = A_2 B_3 - A_3 B_2$

where we have used the definition of $\epsilon_{ijk}$ throughout the reasoning.

One can also see that $\vec{A} \times \vec{B} = -\vec{B} \times \vec{A}$ (this is another exercise for the reader) and that $\vec{A} \times \vec{A} = 0$.

If one only wants to know the magnitude of $\vec{C}$ the following equation should be used: $C = AB\sin\theta$, where $\theta$ is the angle between $\vec{A}$ and $\vec{B}$.
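The component formula $C_i = \sum_{j,k} \epsilon_{ijk} A_j B_k$ translates directly into code. A Python sketch (function names are mine; indices run over $0, 1, 2$ rather than $1, 2, 3$):

```python
def levi_civita(i, j, k):
    """Permutation symbol epsilon_ijk with indices running over 0, 1, 2."""
    if (i, j, k) in ((0, 1, 2), (1, 2, 0), (2, 0, 1)):
        return 1
    if (i, j, k) in ((0, 2, 1), (2, 1, 0), (1, 0, 2)):
        return -1
    return 0  # a repeated index

def cross(a, b):
    """C_i = sum_{j,k} epsilon_ijk A_j B_k."""
    return [sum(levi_civita(i, j, k) * a[j] * b[k]
                for j in range(3) for k in range(3))
            for i in range(3)]

print(cross([1, 0, 0], [0, 1, 0]))  # [0, 0, 1]
```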

After choosing the three axes that define our frame of reference one can choose as the basis of this space a set of three linearly independent vectors that have unit norm. These vectors are called unit vectors.

If we denote these vectors by $\vec{e}_1$, $\vec{e}_2$, $\vec{e}_3$ any vector can be written as $\vec{A} = \sum_i A_i \vec{e}_i$. We also have that $\vec{e}_i \cdot \vec{e}_j = \delta_{ij}$ and $\vec{e}_i \times \vec{e}_j = \vec{e}_k$ for $ijk$ an even permutation of $123$. Another way to write the last equation is $\vec{e}_i \times \vec{e}_j = \sum_k \epsilon_{ijk} \vec{e}_k$.

** — 5. Vector differentiation with respect to a scalar — **

Let $\phi$ be a scalar function of $s$: $\phi = \phi(s)$. Since both $\phi$ and $s$ are scalars we know that their transformation equations are $\phi' = \phi$ and $s' = s$. Hence it also is $d\phi' = d\phi$ and $ds' = ds$.

Thus it follows that for differentiation it is

$\dfrac{d\phi'}{ds'} = \dfrac{d\phi}{ds}$

In order to define the derivative of a vector with respect to a scalar we will follow an analogous road.

We already know that it is $A_i' = \sum_j \lambda_{ij} A_j$, hence

$\dfrac{dA_i'}{ds'} = \dfrac{d}{ds'} \displaystyle\sum_j \lambda_{ij} A_j = \sum_j \lambda_{ij} \dfrac{dA_j}{ds'} = \sum_j \lambda_{ij} \dfrac{dA_j}{ds}$

where the last equality follows from the fact that $s$ is a scalar.

From what we saw we can write

$\dfrac{dA_i'}{ds'} = \displaystyle\sum_j \lambda_{ij} \dfrac{dA_j}{ds}$

Hence $\dfrac{dA_i}{ds}$ transforms like the coordinates of a vector which is the same as saying that $\dfrac{d\vec{A}}{ds}$ is a vector.

The rules for differentiating vectors are:

$\dfrac{d}{ds}(\vec{A} + \vec{B}) = \dfrac{d\vec{A}}{ds} + \dfrac{d\vec{B}}{ds}$

$\dfrac{d}{ds}(\vec{A} \cdot \vec{B}) = \dfrac{d\vec{A}}{ds} \cdot \vec{B} + \vec{A} \cdot \dfrac{d\vec{B}}{ds}$

$\dfrac{d}{ds}(\vec{A} \times \vec{B}) = \dfrac{d\vec{A}}{ds} \times \vec{B} + \vec{A} \times \dfrac{d\vec{B}}{ds}$

$\dfrac{d}{ds}(\phi \vec{A}) = \dfrac{d\phi}{ds} \vec{A} + \phi \dfrac{d\vec{A}}{ds}$

The proof of these rules isn’t needed in order for us to develop any kind of special skills, but if the reader isn’t very used to this it is better to work through them just to see how things happen.
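As a quick numerical sanity check of the product rule for the scalar product, one can compare a finite-difference derivative of $\vec{A}\cdot\vec{B}$ against the right-hand side of the rule. This Python sketch (the sample functions $\vec{A}(s)$ and $\vec{B}(s)$ are mine, chosen arbitrarily) does just that:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def A(s):
    return [s, s ** 2, math.cos(s)]

def B(s):
    return [math.sin(s), 1.0, s ** 3]

def dds(f, s, h=1e-6):
    """Central-difference derivative of a vector-valued function of s."""
    return [(p - m) / (2 * h) for p, m in zip(f(s + h), f(s - h))]

s, h = 0.8, 1e-6
lhs = (dot(A(s + h), B(s + h)) - dot(A(s - h), B(s - h))) / (2 * h)
rhs = dot(dds(A, s), B(s)) + dot(A(s), dds(B, s))
# lhs and rhs agree up to the finite-difference error
```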

That first set of equations doesn’t make sense. How can

$x_i' = x_i'(x_1, x_2, x_3)$

? A scalar is being set equal to an ordered triple.

This is standard function notation. It means that the scalar $x_i'$ is a function of three variables: $x_1$, $x_2$ and $x_3$.

Ah, then it makes sense. Not sure why I didn’t see it that way, but perhaps I was too tired.

The definition of scalar that is given seems bizarre, but I see from researching it a little that it’s a definition unique to physicists.

In fact one can also distinguish scalars from pseudo-scalars (just like one can distinguish vectors from axial vectors) but this isn’t needed for the time being. Anyway, what’s the definition of scalar you’re used to?

The definition of scalar you’re using is ultimately equivalent, I think, to the definition I’m used to. But, as you may already know, if you have a field $F$, a set of objects $V$ is called a vector space over $F$ if there is a function $+ : V \times V \to V$ and a function $\cdot : F \times V \to V$ that satisfy a collection of axioms:

1) For all $u, v \in V$, $u + v = v + u$.

2) For all $u, v, w \in V$, $(u + v) + w = u + (v + w)$.

3) There exists some $0 \in V$ such that $v + 0 = v$ for all $v \in V$.

4) For all $v \in V$ and $-v \in V$, $v + (-v) = 0$.

5) For all $a \in F$ and $u, v \in V$, $a \cdot (u + v) = a \cdot u + a \cdot v$.

6) For all $a, b \in F$ and $v \in V$, $(a + b) \cdot v = a \cdot v + b \cdot v$.

7) For all $a, b \in F$ and $v \in V$, $(ab) \cdot v = a \cdot (b \cdot v)$ and $1 \cdot v = v$.

Anyway, long story short, the elements of the field are called “scalars”. Your definition says a scalar is a physical quantity that is invariant under the operations of coordinate system rotations and translations. And that is interesting, because then it certainly makes sense to say that speed is a scalar quantity and velocity is a vector quantity. I probably knew these things once, when I was a physics student.

Oh dear. It doesn’t seem like I can edit my comment to fix Axiom 4:

4) For all $v \in V$ there exists $-v \in V$ such that $v + (-v) = 0$.

Anyway you get the idea. 😉

Yes I’m used to the linear algebra definition of a scalar (let me just put it this way…).

These series posts on classical Physics will serve mainly to fix some terminology and get people used to physical reasoning.

If you want you can also post the solution of some (all) of the problems I left as an exercise.

Next week I’ll follow through with one more post of more of a mathematical content and then we’ll enter real Physics.

Ps: I’ll edit your post for you.

And of course, the function $\cdot$ is the “scalar multiplication function,” which takes in a scalar $a$ and a vector $v$, and returns an output $\cdot(a, v)$:

$\cdot : F \times V \to V$.

Normally $\cdot(a, v)$ is written as $a \cdot v$, or simply $av$.

In physics, naturally, the field $F$ is usually the set of real numbers $\mathbb{R}$.

I guess what I’m doing here is laying down some of the basic properties of a vector space for anyone out there who needs a refresher. Quite possibly everyone in the Quantum Gang is already on top of this stuff.

Some of the members may not be, so if you want to make a post about it go ahead. It may be a little bit too mathematical, but I don’t see anything wrong with that. When we get to Quantum Mechanics we’ll also see some revisions of Linear Algebra but it’ll be under a physicist’s optics and we’ll use Dirac notation. Hence a more natural introduction to the subject might be a good idea.

Ps: Don’t forget to send me your introduction text.

Your write-up above is quite nice, by the way.

Heavily inspired by Marion’s book on Classical Dynamics.

Edit: I also think that it comes across that way because of the organization that this style of presentation allows one to use.

Yeah I see the equality more as a declaration, for the sake of clarity, that the point sitting in its own little coordinate system can be expressed in terms of the three variables $x_1, x_2, x_3$ in some other coordinate system. Trivial perhaps, but we could also have said something like $x_i' = f_i(x_1, x_2, x_3)$, for example.

I think what has been presented is a good start. I had a slight issue with the fact that I am used to transformations including shears and dilations yet here we consider only rotation matrices (which seem to be synonymous with transformation matrices), but overall good stuff.

I’ve edited your comment because you should have written $latex (an l instead of an i) instead of $itex.

Anyway guys plenty of exercises for you to solve. 😉 😛

Edit: Isn’t it ironic (I’m a positivist not a normativist (actually I’m not, but bear with me)) that my own post also needs an edit?

Apologies…I shall make a better effort at latex typing next time!

Sorry, I will look into this later, although at a glance it looks pretty straightforward as a refresher.

Looking forward to the next installment… 🙂

It’ll happen today or tomorrow.

But there are some exercises that are waiting for a solution…

Where are these exercises at?

The ones I’ve said that are left as an exercise for the reader. Two of them I’d consider to be mandatory and one is inessential but still important if you’re in this.

A couple of questions:

1.) In the first section when you’re talking about coordinate transformations, your summation notation seems to imply that if lambda is the transformation matrix from one coordinate system to another then the transpose of lambda is the inverse transformation. Unless I am misinterpreting or misunderstanding what you are saying, this does not seem obvious to me.

2.) In section 2 you say that if two matrices commute then they are inverses of each other. This again is not obvious to me. Can you clarify?

Even though you have more or less answered your questions (and no they don’t make you an idiot they show that you have interest in what you’re reading) let me just add a few more comments:

1.) This has to do with the fact that the $\lambda_{ij}$ are actually the cosines of the angles between the axes of the two frames. A lot more can be said about this but frankly I didn’t want this post to be too long and I’m hoping that any major gaps will be filled by the other members and/or interested readers.

2.) In this I really don’t have much to say, but that part of the post really wasn’t that clear. I mixed in bits about matrices in general and bits about matrices that represent rotations. Thanks for pointing that out.

I’m an idiot, and should have read the whole post before commenting. This is a post about rotations, not general transformations. Answered my own question. Sorry.

I’m afraid I don’t understand the significance of the Kronecker delta matrix… and I bet once I look more deeply at some other things similar knowledge gaps will appear, but for now I hope you can just roll with the punches.

After doing a bit more reading, I also feel that its understanding is crucial to the completion of the exercise in section 3.1? Obviously however I could be wrong.

Sorry for taking so long but I have my hands full at the moment. I think that this weekend I can give you a decent answer and I’ll also post the follow-up post to this so that we can get our blog moving.

The solution of this exercise is very similar to the solution presented in the text. You just have to remember what a scalar is in our context.

The significance of Kronecker delta is mainly of simplification. In this case it expresses the fact that the direct transformation and the inverse one are orthogonal. This means that if you apply the direct transformation to a vector and then apply the inverse transformation to the resulting vector you just get the vector you started with. Which is just as it should be.




I am struggling to understand the Kronecker delta. Is it possible to give an example?

Hi Timothy,

The Kronecker delta is a function of two variables which is equal to 1 when the two variables are equal and equal to 0 when the variables are different.

So an example can be: $\delta_{11} = 1$ since the two variables are equal, and $\delta_{12} = 0$ since the variables are different.
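In code the definition is a one-liner; a tiny Python sketch (the function name is mine):

```python
def kronecker_delta(i, j):
    """1 when the two indices are equal, 0 when they differ."""
    return 1 if i == j else 0

# arranged as a matrix, the Kronecker delta is just the identity matrix
identity = [[kronecker_delta(i, j) for j in range(3)] for i in range(3)]
print(identity)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```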

