Monday, May 25, 2009

Symmetry

Oh, right, so why the title "Spontaneous Symmetry"? And for that matter, what exactly do I mean by symmetry?

The concept of symmetry is one of the most important in physics. It can dramatically simplify problems, and often it is the only road toward an answer. The symmetries that are used in modern physics are often very abstract and esoteric, and are described by the beautiful field of mathematics known as group theory. The goal of group theory is to find general rules and patterns for systems that frequently appear in nature, math, physics, computer science, and nearly every other field. Group theory describes things as varied as a Rubik's cube, the energy levels of the hydrogen atom, cryptography, and countless other applications. But let's start off slowly.

We are all familiar with things that are "symmetric" in the layman's sense. Something is symmetric if I can draw a line through it and it is the same on both sides. People are nearly symmetric if you draw a line through their head, torso, and between their legs, splitting the body in twain. For the sake of description, let's imagine a 2-d drawing of a human that indeed is perfectly symmetric. I can cut the drawing in half, flip one side over, and it will perfectly match the other side. A mathematician would describe this sort of mirror symmetry in the following way: I can take every part of the person, each arm, hand, finger, foot, etc., and reflect it to the opposite side of the person, and I will end up with the exact same image of the person.

One can imagine that if the person were holding a cane in his left hand and not in his right hand, he would no longer be symmetric. If I were to perform the above operation and switch the left and right sides, the cane would now be in the person's right hand. So, by swapping left and right, I DON'T get back the same image. The cane in this example breaks the symmetry.

Mathematically, this symmetry is described by taking things at position x and moving them to position -x (if we assume that the origin is aligned with the center of the person). In physics, this is called parity. The person that we imagined above (without the cane) is "symmetric under parity" or is "symmetric under mirror symmetry" or is "symmetric under reflections" or whatever term you want to use to describe it.

The point is that the symmetry that is most familiar to us is only a very simple example of symmetries. In general, a symmetry is defined as an operation that I can perform on an object that will leave the object the same. Consider a perfect circle. Certainly this circle also possesses the mirror symmetry that we described above. I can draw a line through the circle and flip each half over this line and the circle will be unchanged. But for the person, there was only one such line that I could draw. For the circle, there are infinitely many. The symmetries of the circle, we are starting to see, are much richer and more interesting than those of the person.

Aside from mirror symmetry, we are also free to rotate the circle. If I place my finger in the middle of the circle and spin the circle, it will appear unchanged (assuming the circle is indeed perfect). No matter how much I rotate it, I won't change anything. The circle, we say, is symmetric under rotations. Notice how this group of symmetries is different from the reflection symmetry from before. The act of reflecting is in a sense "discrete." You either do it or you don't. It's like being pregnant: you either are or you aren't. But rotations are "continuous." You can rotate by any amount, including none at all or any arbitrarily small or large amount.

Each of the sets of actions that we can do that leave an object the same form a "group." The set of rotations that we can do to a circle that leave it the same form what is known as the "rotation group" (specifically, for the mathematicians in the audience, it is called U(1), or equivalently SO(2). This is also equivalent to the real numbers modulo the integers, and a fun exercise would be to find the isomorphism). The defining feature that makes these "symmetries" a "group" is the fact that any two symmetries done in succession make another symmetry. For example, if I rotate by 30 degrees and then by 80 degrees, it is the same as rotating by 110 degrees. It's as simple as that. I can also rotate backwards (i.e. rotating by -90 degrees is the same as rotating by a right angle in the opposite direction), or I could just not rotate at all (the identity).
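To make the group rules concrete, here is a tiny Python sketch (my own illustration, not any standard library) that treats rotations of the circle as angles in degrees:

```python
def compose(a, b):
    """Compose two rotations of the circle, given as angles in degrees.

    The result is reduced mod 360, since rotating by a full 360 degrees
    is the same as not rotating at all.
    """
    return (a + b) % 360

# Closure: two rotations compose to another rotation.
print(compose(30, 80))   # 110

# Identity: rotating by 0 degrees changes nothing.
print(compose(110, 0))   # 110

# Inverses: every rotation can be undone by rotating backwards.
print(compose(90, -90))  # 0

# Associativity: how we group successive rotations doesn't matter.
print(compose(compose(10, 20), 30) == compose(10, compose(20, 30)))  # True
```

These four properties (closure, identity, inverses, associativity) are exactly what mathematicians require of a group.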

Groups can be as simple as reflecting things and rotating squares and circles, but they can also be complicated. The more abstract ones involve matrices and complex numbers, but it's not necessary to bring them up in order to understand the main concepts.

So, how does any of this apply to physics? It turns out that for each continuous symmetry of a system, there is a corresponding "conserved" quantity. For those unfamiliar with this term, a conserved quantity is something that remains the same no matter what we do. For example, charge is conserved in the sense that you can't create charge, you can only move it. Energy is conserved, and you can only change its form, meaning heat is created when I use my laptop's battery. Momentum is conserved, meaning if I throw something in space in one direction, I will go flying in the opposite direction.

All of these conserved quantities can be derived from a symmetry of nature. The mechanism for doing this is one of the most beautiful theorems in all of physics. It's called Noether's theorem, first proved by Emmy Noether. Einstein apparently called her the most important woman in the history of mathematics. It says what I just said, that each continuous symmetry leads to a conserved quantity (it just says it in fancy math language).





Conservation of energy comes from the fact that the laws of physics are constant in time (meaning if I perform an experiment at 3:00 in the morning, it should have the same results as if I did it at 5:00 in the afternoon, ignoring external differences associated with the different times of day such as daylight, weather, etc). Conservation of momentum comes from the fact that the laws of physics don't care where they take place (meaning they are the same here as well as in the Andromeda galaxy, or they are "symmetric under translations").

Conservation of charge comes from the fact that a rotation of electron fields, similar to the circle rotation above, leaves the energy of the system unchanged. This is of course mostly nonsense to those who aren't familiar with Quantum Field Theory, but I assure you that it's no more difficult than spinning a circle about its center.

Tuesday, May 19, 2009

Two Envelopes: Answer

I'll start off by addressing the comments.

Just to reiterate, the paradox is the idea that for all possible values that we find in the envelope, switching is advantageous. To Erik, this clearly makes no sense because there is nothing that distinguishes the envelopes. This is different from a situation where we have a particular value, say $100, and are given the option of switching for either $200 or $50 with equal probability. Clearly in that case switching is advantageous.

To Mr. Zrake's argument: you have shown that having a strategy of always switching will not be advantageous. This is different from the idea that once we've opened the envelope and seen a particular value, we always want to switch. Let me explain with an example. Imagine that I tell you in advance that the envelopes contain exactly $50 and $100, and you get one envelope with a 50/50 chance. In this case, a strategy of always switching is clearly going to have the same expectation value as a strategy of always staying, which is just the mean of the two envelopes. However, if you open the envelope and see $50, you should always switch, and if you see $100 you should always stay.

The last example, I think, is a big hint toward the resolution of the paradox. See, if we know in advance how the envelopes are filled, then our strategy should be clear. Let's go back to the beginning. We haven't discussed how the person decides how much money to put in each envelope. The only constraint thus far is that one envelope should have twice the money that the other has. When we open a particular envelope, we know that the other has either twice the money or half the money. The key, however, is that there isn't a 50/50 chance of it being either half or double, as one would naively expect. The relative probability between half and double depends on how the envelopes are filled, i.e. it depends on the probability distribution that the filler uses to pick what amount of money goes into the envelopes.

And here's the real key: it is impossible to have a probability distribution where, for every X that we see in the envelope, there is a 50% chance of the other having half and a 50% chance of it having double.

To be more concrete, let's say that the filler chooses to fill the envelope in the following way: he has some probability distribution p(x) that he uses to randomly fill one envelope. He then fills the other envelope with double that amount, and flips a coin which determines whether he gives us the larger or the smaller envelope.

So, if we open our envelope and see X dollars, it means that one of two things happened. Either the "seeded" number was X and we got the smaller envelope, or the seeded number was X/2 and we got the larger envelope. The expectation value of switching when we see X in our envelope is:

EV(switching) = N * [ p(X)*2X + p(X/2)*(X/2) ], where N = 1/(p(X) + p(X/2)) is a normalization constant so that the probabilities of the two cases add up to 1.

Clearly, this expectation value depends intimately on the probability distribution function p(X). If we knew this function explicitly going in, then we could determine the proper strategy by comparing the above expectation value to X. If we DON'T know this function going in, which is the case presented in the original problem, then we CAN'T come up with the optimal strategy. However, that doesn't mean that it's ALWAYS correct to switch; it just means that we can't determine the correct answer because we don't have enough information.
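A quick Monte Carlo sketch in Python illustrates both points at once (this is my own illustration; the exponential seeding distribution is an arbitrary choice, since the original problem never specifies one). Always switching gains nothing overall, but switching conditional on seeing a small amount does gain, because small amounts are more likely to be the smaller envelope under this distribution:

```python
import random

random.seed(1)

def play():
    """Fill envelopes as described above: seed an amount x from some
    distribution (here exponential, an arbitrary choice), put 2x in the
    other envelope, and hand us one of the two at random."""
    x = random.expovariate(1.0)  # the "seeded" amount
    if random.random() < 0.5:
        return x, 2 * x          # (ours, other)
    return 2 * x, x

N = 200_000
stay = switch = 0.0
low_gain, low_n = 0.0, 0
for _ in range(N):
    ours, other = play()
    stay += ours
    switch += other
    if ours < 0.5:               # conditional on seeing a small amount...
        low_gain += other - ours
        low_n += 1

print(f"always stay:   {stay / N:.3f}")
print(f"always switch: {switch / N:.3f}")  # essentially the same: no blanket advantage
print(f"average gain from switching when we see < 0.5: {low_gain / low_n:+.3f}")
```

With the distribution known, the right strategy is clear; without it, as the post says, there simply isn't enough information.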

Friday, May 15, 2009

Two Envelopes Paradox Question

A man you've never met comes up to you on the street. He generously offers to play a game with you. He says that he's going to take two envelopes and fill them with money. One of the envelopes is going to contain twice as much money as the other, but he doesn't tell you what either amount will be. He will then give you one of the two envelopes randomly, with a 50/50 probability of getting either. He will let you look in it and decide if you want to keep that money or if you want to switch envelopes and be forced to keep the amount in the other envelope.

Let's say that you open the envelope and see $100. Should you stay and be happy with your envelope, or should you switch and potentially get $200 or end up with $50?

Well, half the time you have the higher envelope and half the time you have the lower envelope, so your expected value for switching is:

E = (1/2) * $200 + (1/2) * $50 = $125

Your expected value for switching is more than the $100 that you would always get by staying. So, based on the math, you should switch envelopes. Easy problem, right?

But, if you think about it, the situation makes absolutely no sense. Nothing was special about $100. For any amount X, the expected value of switching is 1.25*X, so we should conclude that we should always switch when we are given the envelope. Thus, by this math, we should switch envelopes even before we look. The paradox lies in the fact that there is a natural symmetry between the envelopes. Since we know nothing special about either one, how could it possibly be preferable to switch? They are both equally likely to be the bigger one, so we should really break even by switching. It should gain us no advantage. But the math is simple and clear. So, what's going on here?


This is known as the two-envelope paradox. It's pretty interesting, I think, and it really stumped me for a while when I was first thinking about it. The solution is somewhat non-trivial but is also enlightening. So, I encourage you to think about the problem and to see if you can figure out what's wrong here. I'll post my take on the solution later.


Incidentally, Wolfram Alpha is now online, and I'm playing with it a bit.

Monday, May 11, 2009

Wolfram Alpha

This has been getting a lot of hype recently. It remains unclear whether Stephen Wolfram's ambitious new project will live up to expectations and change the way we use the Internet or whether it will be yet another overblown flop that we check out once before returning to Google.

Wolfram is a very ambitious man. He is a renaissance man, and his work spans the fields of mathematics, particle physics, cosmology, and most notably computer science, symbolic algebra, and cellular automata. Wolfram is best known for his development of a fantastic computer program known as Mathematica. In a nutshell, Mathematica is the world's greatest calculator. Really, it's a computer language used to perform symbolic manipulation (in other words, it does math on abstract objects and functions whose exact definitions may or may not be defined).

The idea that ties all of Wolfram's projects together is emergent complexity. Wolfram is extremely fascinated by the concept of very complicated systems emerging from simple rules. [Really, all of nature is a complex system that emerges from a relatively small number of rules. Of course, we don't yet know what these rules are. We know what they look like in certain energy regimes, but we don't know how many there are (there could be only one, or there could be infinitely many). It is the goal of physics to find these rules.]

Consider an anthill. Any particular ant is extremely stupid. They have a very primitive brain whose main job is to interpret sensory information, mostly in the form of smells, and convert this into one of several simple actions. If an ant smells a certain scent, it follows it. If the ant smells an egg with a certain pheromone, it moves the egg. If an ant smells another ant, it attacks that ant. One could write out a list of a dozen or so rules and one would fully define the ant, more or less. However, as a whole, the anthill is a very complex beast. These few, simple rules, when scaled over hundreds of ants, become a giant living brain capable of surviving, finding food, growing, digging, moving, and attacking. The anthill really is one giant brain and each ant acts as a neuron.

Birds fly in beautiful patterns in the sky. They flock as huge groups and always know where to fly as to not run into other birds. But really, all any individual bird knows is the following: if you see a bird of your type flying, fly behind it but slightly to the side. Flocks are the emergent behavior of this simple rule.

Okay, so back to Wolfram. From rules come complexity, from order comes randomness. This is exemplified by something invented by Stephen known as “Rule 30.”


Rule 30

Rule 30 is a simple rule for evolving a row of bits: each bit's new value is determined by its current value and the values of its two nearest neighbors. In other words, one starts with a long string of 0's and 1's. One then computes a new string, where each new bit is chosen by looking at the three bits centered on its position and applying the following rules:

111 -> 0
110 -> 0
101 -> 0
100 -> 1
011 -> 1
010 -> 1
001 -> 1
000 -> 0

There are 8 rules and they represent all possible combinations of three binary numbers.
In other words, if three consecutive bits read "1 1 1", the middle bit becomes 0 in the next row; if they read "1 0 0", it becomes 1; and so on. All of the bits are updated simultaneously to produce a new row, and then the same process is applied to that row, for as long as we like. From these simple rules chaotic patterns emerge. Just as ants, birds, the very neurons in our brain, quarks, gluons, electrons, photons, and all other elementary particles run on some set of rules and yet create complexity.
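Here is a short Python sketch of Rule 30 (my own illustration), where each generation is computed from the previous one and printed as a row; starting from a single 1, the familiar chaotic triangle emerges:

```python
RULE = 30

def step(cells):
    """Apply one synchronous update of Rule 30.

    Each new cell is determined by the 3-cell neighborhood
    (left, center, right) in the previous row; bit k of the rule
    number 30 gives the output for neighborhood value k.
    Cells beyond the edges are treated as 0.
    """
    n = len(cells)
    new = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        k = (left << 2) | (cells[i] << 1) | right
        new[i] = (RULE >> k) & 1
    return new

def run(width=31, steps=15):
    row = [0] * width
    row[width // 2] = 1  # single live cell in the middle
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row)

run()
```

The first few generations from a single 1 are 111, then 11001, and the pattern quickly becomes irregular even though the rule table above is all there is.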



Dr. Ian Malcolm and Stephen Wolfram would certainly be good friends.

Wolfram Alpha is designed as the next step in human-computer interaction (it takes us a very large step toward Skynet). It is the computer system envisioned by countless sci-fi authors decades ago. You ask it a question and it gives you an answer; simple as that. (Good examples of stories based on this idea are "The Last Question" by Isaac Asimov and "EPICAC" by Kurt Vonnegut).

The Last Question

EPICAC


It's supposed to come online this week, and I look forward to playing with it. And I'd say that there's only a 10% chance that when turned on, it will begin the war with the machines. And even if it does, we could always ask it to play tic-tac-toe against itself. That's every sentient computer's kryptonite.

Sunday, May 10, 2009

Boy or Girl

So, who doesn't love getting confused by probability? I know I do. Let's just jump in right now without further ado.


A couple has two children. Assume that when a person gives birth, they have a 50% chance of having a boy and a 50% chance of having a girl. The couple tells you that at least one of their children is a boy. What is the probability that the other child is a boy?

This is a pretty simple puzzle in terms of calculations, but it causes a lot of confusion in a lot of people, so I thought it'd be fun to address. Since the chances of having a boy or a girl are 50/50, one would naively assume that the knowledge about one child doesn't affect the probability of the other being a boy or girl. Thus, most people say the answer is 50/50.

But this is of course wrong. If the couple tells you that they have at least one boy, there is a 2/3 chance that the other child is a girl. Weird, right? Remember, this has nothing to do with correlations between children. We are assuming that all births are independent of each other. So, why is this so? The easiest way to figure it out is by examining the ways that a couple can have two children and finding their probabilities. They are as follows (B = Boy, G = Girl):

BB 25%
BG 25%
GB 25%
GG 25%

Each of these has an equal probability (50% * 50% = 25%). If the couple tells us that they have at least one boy, then all they have done is eliminate the last way of having two kids, meaning that we ignore the GG combination.

Thus, the remaining combinations are:

BB
BG
GB

and they occur with equal probability. Of the remaining choices, two of them involve a boy and a girl, and the other involves two boys. Thus, it is twice as likely to have a boy and a girl than to have two boys. So, if a couple tells you that they have at least one boy, it means that 2/3 of the time, their other child is a girl. This is really a problem of semantics. Most people's confusion comes from the idea of having "at least" one boy.

We get a different answer if we phrase the question in the following way: "A couple has two children. The younger child is a boy. What is the probability of the sex of the other child?" Here, the answer is 50% boy and 50% girl. So, what's the difference? Again, it becomes clear if we list the possibilities (writing the younger child first):

BB
BG
GB
GG

If they tell us that the younger child is a boy, we are only left with:

BB
BG

and these have equal probability. Thus, the second child is a boy half the time and a girl the other half.
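Both phrasings can be checked by brute-force enumeration. Here's a small Python sketch (my own illustration) that lists the four equally likely families and conditions on each statement:

```python
from fractions import Fraction
from itertools import product

# All equally likely two-child families, written (younger, older).
families = list(product("BG", repeat=2))

def prob_both_boys(condition):
    """P(both children are boys | the given condition holds)."""
    kept = [f for f in families if condition(f)]
    both = [f for f in kept if f == ("B", "B")]
    return Fraction(len(both), len(kept))

# "At least one child is a boy": the other is a boy only 1/3 of the time.
print(prob_both_boys(lambda f: "B" in f))      # 1/3

# "The younger child is a boy": the other is a boy 1/2 of the time.
print(prob_both_boys(lambda f: f[0] == "B"))   # 1/2
```

The first condition keeps three families (BB, BG, GB); the second keeps only two (BB, BG), which is exactly the counting argument above.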


Interesting, no? This is a relatively simple problem. Maybe we'll get some harder ones in the future...

Friday, May 8, 2009

Uncertainty

So, let's talk about quantum mechanics for a bit. Specifically, let's talk about what is known as the Heisenberg Uncertainty Principle.

Jane Goodall didn't study chimpanzees. Yes, she lived in Africa, entrenched herself in the surroundings, and famously detailed the comings and goings of a group of chimps. But it was impossible for her to study chimpanzees in a general way. Instead, what she really studied was a group of chimpanzees who were being studied by Jane Goodall. Who's to say that the presence of this foreign Homo sapiens didn't completely alter the behavior of those primates? It's impossible for it not to have affected them in some way, and therefore her studies were skewed by her very presence. This is a common aspect of science. The act of experimenting alters that which is being studied.

I cannot stress the following point enough. This is NOT the Heisenberg Uncertainty Principle. This is what many people casually call the uncertainty principle, but the real version of the uncertainty principle is much more interesting. Yes, Goodall's presence had a back reaction on the chimps. But, given enough technology and enough ingenuity, she could have reduced the effect of her presence greatly. One could imagine an extremely well financed biologist who used extremely small flying cameras to study the chimps. These could be no bigger than a fly but could be equipped to take pictures, record sounds, and even take in smells. And they could be made sufficiently unobtrusive as to not even be noticed by the chimps. Yes, they could in theory have a small effect on the chimps (one could accidentally eat one of these fly-cameras, which would change the behavior of the animal by making it temporarily sick or something). But with sufficient resources, one could make the cameras smaller, or change their algorithms so that they would avoid the chimpanzees better, or make them taste like real flies, or whatever. The point is that in this example, one can reduce the influence of studying the chimps to an arbitrarily small amount. And if one is unsatisfied, one can always make it smaller.



The uncertainty principle doesn't say that measuring something affects the thing that you are measuring. This is obvious. The uncertainty principle says that, when we're talking about measuring really small things (molecules, atoms, electrons, light, etc.), we can't just keep making our experiment better and better to minimize its influence on the thing we are studying. Heisenberg discovered that there is an inherent limit on the extent to which we can measure something without changing it very much. When we reach a certain point, no matter how rich or clever we are, any experiment that makes a better measurement will affect the thing we are measuring more.




To take a specific example, consider a thought experiment known as "Heisenberg's Microscope." Imagine that we have an electron (a small charged particle) that is moving in a straight line, and we want to measure its position by shining a light on it. Imagine we fire the electron in a way so that we know it is traveling in a straight line with a certain speed (old TVs did this using cathode ray tubes). We then shine a beam of light on about where we think the electron will be, and we'll be able to tell its position based on how the light reflects. It turns out that in order to get better resolution of the position of the electron, we have to use more energetic light (i.e. light with a smaller wavelength). But if we use more energetic light, the light will "kick" the electron harder when it bounces off of it and will cause more of a change in the electron's speed.

So, there is a trade-off between how well we know the electron's position and how well we know its speed after we measure it using the light. To know its position better, we kick it harder using more energetic light and therefore know its speed less well. Heisenberg quantified this effect and said:

(Uncertainty in position) * (Uncertainty in momentum) ≥ ħ/2,

where ħ is the reduced Planck constant. This means that if we want to minimize the uncertainty in position, the uncertainty in momentum must go up (momentum is just speed times the mass of the object, so this is more or less the uncertainty in an object's speed).

This is how I like to visualize this effect. Say we have a small particle trapped in a box. If it's a really big box, we don't really know where the particle is inside the box, we only know that it's somewhere inside there. Our uncertainty in its position is equal to the size of the box. Now, imagine that we keep making the box smaller and smaller. We still don't know exactly where the particle is inside the box, but as the box gets smaller, the range of places that it can be gets smaller and smaller. So, our uncertainty gets less and less.

Heisenberg tells us that in this situation, the particle MUST start to move, and as the box gets smaller, it starts to move faster and faster. Why is this so? Well, if we KNEW that the particle was always stationary, we would know its speed exactly, meaning that our uncertainty in its speed would be zero (or very small). And we could then make the box smaller and smaller, and our uncertainty in its position would become arbitrarily small. But Heisenberg says that this is impossible; something's got to give. As we make the box smaller, we can no longer be sure that the particle is just sitting there stationary. Our uncertainty in its velocity must increase, meaning it's bouncing back and forth inside the box.

As we make the box really, really small, it starts bouncing faster and faster. You should now appreciate that this is a much more profound statement than the simple fact that Chimps were aware of Jane Goodall.
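The box picture can be turned into numbers with a rough Python sketch (my own order-of-magnitude illustration): take the position uncertainty to be the size of the box L, so the uncertainty principle forces a minimum momentum uncertainty of ħ/(2L), and divide by the mass to get a minimum velocity uncertainty:

```python
HBAR = 1.054571817e-34    # reduced Planck constant, J*s
M_ELECTRON = 9.1093837e-31  # electron mass, kg

def min_speed_uncertainty(box_size_m, mass_kg=M_ELECTRON):
    """Minimum velocity uncertainty for a particle confined to a box.

    Takes the position uncertainty to be the box size L, so
    dx * dp >= hbar/2 forces dp >= hbar / (2 L), and dv = dp / m.
    This is only an order-of-magnitude estimate.
    """
    dp = HBAR / (2 * box_size_m)
    return dp / mass_kg

# Shrink the box: the electron must move faster and faster.
for L in (1e-6, 1e-9, 1e-12):  # a micron, a nanometer, a picometer
    print(f"L = {L:.0e} m  ->  dv >= {min_speed_uncertainty(L):.2e} m/s")
```

For an electron in a nanometer-sized box (roughly atomic scale), this already comes out to tens of kilometers per second, which is why confined electrons can never just sit still.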

Thursday, May 7, 2009

WHY

So, from time to time, I’m confronted with the question of why we are spending a lot of resources trying to blow up protons and look for a funny thing called the Higgs boson. After all, many countries, including the United States and several in the European Union, have contributed over 8 billion dollars to making a Swiss tunnel and a few big, scary looking devices to surround parts of this tunnel. So, why are we doing this? To this question, I have several answers, and any particular answer may depend on my mood. These answers include:

The Angry One
-Screw you, I don’t go around criticizing your job and its influence on humanity. So don’t do the same to me, ya bastard.

The Pragmatic One
-Over the years, particle physics, and CERN especially, have contributed greatly to technology, though often as a side effect. In fact, the very first web server was set up at CERN (I saw it, and there was a note on it saying, “This is a server, do not turn off”). CERN is famously the epicenter of many major developments that directly led to the World Wide Web, including the invention of HTTP and HTML. So, without particle physics, there would be no Web (maybe).

The Scientific One
-No one knows at the time what advances in science will lead to technological advancement long in the future, and thus physics should be insulated from concerns about its direct applicability. No one thought that some odd man thinking about electromagnetic waves would lead to special relativity, that special relativity would lead to general relativity, and that both would forever change our world (I guess the direct applications of relativity to our everyday lives are limited, but people often cite the need to consider GR corrections when using GPS). Certainly no one thought that a group of scientists trying to figure out certain oddities of radiation and light would lead to the strange theory of Quantum Mechanics, which is the theoretical basis for much of the technology of the 20th century. So, it’s impossible to say which of today’s seemingly esoteric theories will become necessary for technological advancement in the future.

The Edmund Hillary One
-I was going to try to paraphrase a bit, but I decided that it’s best just to quote Sam Seaborn directly:

Because it's next. For we came out of the cave, and we looked over the hill, and we saw fire. And we crossed the ocean, and we pioneered the West, and we took to the sky. The history of man is hung on the timeline of exploration, and this is what's next.

Theoretical physics is the latest in a series of frontiers. The LHC is our ship, and ATLAS is our spyglass, and sometime in the next few years we will look forth and see dry land.

Speaking of the West Wing:

Theory of Everything

ATLAS

So, I’m a graduate student of physics at NYU. I am studying experimental particle physics. The meaning of the word “particle” has changed dramatically over the course of the last century. In layman’s terms, a particle is a small object, such as a dust particle or a particle of sand. In terms of physics, the term particle refers to the smallest objects that we are aware of, and usually those objects are fundamental (meaning that they’re not composed of other objects). Particles used to be molecules, then they were the atoms that made up the molecules, and then they were the protons, neutrons, and electrons that make up the atom. Now, in addition to the last three mentioned, they include a whole zoo of odd sounding objects that fly through space, come out of nuclear decays, and can be created in a collider.







I work on an experiment known as ATLAS, which is one of several experiments that use the Large Hadron Collider (LHC), which is located at a laboratory in Switzerland called CERN (the European Organization for Nuclear Research). The LHC is a particle accelerator and collider (the biggest one in the world). It uses huge magnets to bend protons around a 27-kilometer-long ring which is located about 100 meters below the border between Switzerland and France. The protons traveling around the LHC move at nearly the speed of light and have an energy of 7 TeV (tera-electron-volts, meaning 10^12 electron volts). Each proton moves around the ring more than 10,000 times every second. These protons move in both directions around the ring, and at certain points they are made to collide with each other. They move in groups of about 10^11 protons called bunches, and at an interaction point bunches collide with each other once every 25 nanoseconds (which is pretty often). Even though the beam becomes very focused, most protons miss each other, and at each bunch collision, only about 50 of the 10^11 protons actually hit one another and interact (though most of these interactions are very boring).
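The 10,000-laps-per-second figure is easy to sanity check with a couple of lines of Python (the 26,659 m circumference is the LHC's published value; everything else is just the arithmetic from the paragraph above):

```python
C = 299_792_458   # speed of light, m/s
RING = 26_659     # LHC circumference in meters (roughly 27 km)

# A proton at essentially the speed of light covers the ring C / RING
# times per second.
revs_per_second = C / RING
print(f"{revs_per_second:,.0f} revolutions per second")  # about 11,200

# Bunch crossings at an interaction point, one every 25 nanoseconds:
crossings_per_second = 1 / 25e-9
print(f"{crossings_per_second:,.0f} bunch crossings per second")  # about 40 million
```

So each proton really does lap the ring more than 11,000 times a second, and the detectors see tens of millions of bunch crossings in the same time.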




ATLAS is a giant machine that is located at one of these places where the beams of protons collide. It is a detector; it has nothing to do with the actual acceleration of protons nor with their collisions. More or less, it’s a giant camera with many components designed to take pictures of different types of particles. When protons hit each other at the center of ATLAS, they “blow up” and send all sorts of things flying out in all directions. Because of relativity and quantum mechanics, a collision of protons can create particles that weren’t there to begin with. So, even though we began only by throwing protons at one another, we could end up with electrons, muons, protons, neutrons, light (photons), or many, many other particles whose names I don’t care to list here. ATLAS is designed to be a sort of catch-all. It is a discovery machine, and the goal is that we will create particles that we’ve never seen before and ATLAS will detect them.

Spontaneous Symmetry

Apparently, ever since the rise of the internet, people have developed an unquenchable desire to share every minutia of their lives with total strangers. I both mock and sympathize with this tendency. And now I shall go one step further and embrace it. Okay, hopefully this blog will be more interesting than a detailed discussion of what cereal I had in the morning (it was Puffins, by the way).

I have no idea if this will last and I highly doubt that anyone will read it (rather, I doubt that anyone will read it with anything more than the feigned interest that comes with doing anything to avoid actually working). And I don't intend for it to be deep, profound, or interesting in any way (maybe with fleeting exceptions, but I assure you that they will be unintentional).

I don't plan to have any sort of strict theme, but I would like to talk about physics every now and then since that's what I do with my life and I rarely get to talk about it to people outside of the physics community. But, please, don't let that frighten you. I know you "only took physics in high school" or "had to take physics in college, but did horribly in it" or "lol phyiks wat?!?!!1Eleven!", so I'll try to make it simple and interesting.

Okay, so, as Jeff Goldblum as Ian Malcolm said in Jurassic Park, "Well, there it is."