Home Tech Support

Showing posts with label college. Show all posts

Tuesday, 3 September 2013

Seventh Semester at College

Posted on 19:32 by Unknown
How did I become a senior? It doesn't feel like orientation and freshman year happened that long ago.
Tomorrow is the first day of class for the 2013 fall semester. I'll be taking 8.07 — Electromagnetism II, 8.09 — Classical Mechanics III, 8.333 — Statistical Mechanics I (a graduate class), and 14.12 — Economic Applications of Game Theory. I'm looking forward to all of these classes along with continuing my UROP (which may transition sooner or later into a new project as I wrap up my current one). The bigger things I have to deal with though are graduate school applications and the Physics GRE. The latter will be over in a few weeks. The former will be going on until around the beginning of December, but I hope to be done a while before that. Hopefully this semester goes well. Good luck to everyone else for the start of their school year/job/whatever else!
Posted in class, college, MIT, semester, UROP | No comments

Friday, 16 August 2013

Reflection: 2013 Summer UROP

Posted on 13:33 by Unknown
Wow. This summer has been incredibly busy, productive, and fun all at once. I can't believe it's already over!

So what did I do this summer?
My primary concern this summer was my UROP. I have been able to bring it very close to an end point; I wasn't able to finish it up completely, but I guess that was an unrealistic expectation because that's just not how science works. It doesn't wrap up cleanly; it's an ongoing process. I learned a whole lot more about Scheme and MEEP in the process, though, which was great.
On a related note, another UROP project fell by the wayside (as I wrote about earlier this summer) once I realized it was based on flawed calculations. To be honest, I'm not really sure if I want to pick up that project again and try to bring it to some sort of conclusion or if that's really worth my time.
My secondary concern was preparing for graduate school. I took the GRE this past Tuesday, and I am happy to say that it went quite well. I have also been studying for the Physics GRE, along with making my list of graduate schools/programs/professors that I want to investigate further and apply to.
My tertiary concern was making another video for the MIT-K12 project. That went off successfully as well.

Apart from that, not being around my usual set of friends for the summer had a silver lining. While I would certainly have liked to hang out with them more, I was able to become a lot closer to a few people who usually live on my floor during the semester. Compared to the end of last semester, when I would basically just say "hi" to them but not a whole lot more, I now intend to hang out with them significantly more during the coming semester.

What didn't I do? These things didn't happen because I didn't have the time or energy to carry them out.
I wasn't able to edit and publish all the videos I took of 8.033 lectures from 2 years ago. In fact, I couldn't really look at those at all.
I wasn't able to do much work for OCW as I had planned.
(Actually, that's mostly it.)

I'm excited for the coming semester. My classes all look quite exciting, and I'm still deciding what I want to do regarding my UROP once my current project can truly be said to be concluded. That said, I feel a bit sad that this has been my last summer at MIT and that it is already over. After this, I only have 9 more months at this place, and I hope I can make those 9 months really special. Before that, though, I'll be going on a vacation with my family for a few days and then spending the remaining 1.5 weeks of August at home. Yay!
Posted in break, college, crystal, internship, MIT, photonic, physics, quantum mechanics, Reflection, science, UROP | No comments

Friday, 24 May 2013

Done with 6th Semester!

Posted on 09:29 by Unknown
I'm done with junior year! The spring semester was a bit more manageable than the fall semester, but it was still challenging. I intentionally chose to take only 3 classes: 8.06 — Quantum Physics III, 8.14 — Experimental Physics II, and 14.03 — Microeconomic Theory and Public Policy. I did this so that I could spend more time on each of those classes (especially 8.14 — Experimental Physics II) as well as on my UROP. Speaking of my UROP, progress was rather slow at the beginning of the semester and only slowed further from there until just after spring break, at which point it picked up extremely quickly. I'm really looking forward to being able to make more such progress in the summer; plus, I may even be able to start on a new project about the Casimir effect, about which I wrote a paper for 8.06 — Quantum Physics III. Before that, I'm spending two weeks at home. For the next few days, I'm just going to relax, spend time with family, and travel. After that, I'll probably be able to start work again on my UROP; a few weeks into the summer, I intend to start looking seriously into graduate programs in physics. Anyway, at last, it is summer!
Posted in break, college, MIT, semester, UROP | No comments

Monday, 6 May 2013

Expected Utility in Quantum States

Posted on 07:12 by Unknown

Last semester, 14.04 — Intermediate Microeconomic Theory covered choice theory under uncertainty. At the same time, I was taking 8.05 — Quantum Physics II, where we talked about 2-state systems and the formalism of quantum states as complex vectors in a Hilbert space. Because choice under uncertainty discusses how consumers make choices based on states of the world, I thought it would be cool to extend it to quantum states, but I wasn't sure how to do that at the time. Now that 14.03 — Microeconomic Theory and Public Policy is covering choice under uncertainty as well this semester, and now that I have had some time for the material on 2-state systems to simmer in my head, I think I have a slightly better idea of how to extend the states of the world to include superpositions as allowed by quantum mechanics.

One of the simplest examples would be the 2-state system. In the class I am taking now, a typical 2-state system would be a fair coin which can either take on values of heads or tails (each with probability $\frac{1}{2}$). What might this look like in quantum mechanics? We could replace the coin with the spin $S_z$ of an electron. A state of the electron corresponding to being measured as $|\uparrow_z \rangle$ or $|\downarrow_z \rangle$ each with equal probability could be labeled as $|\psi \rangle = \frac{1}{\sqrt{2}} \left(|\uparrow_z \rangle + |\downarrow_z \rangle\right)$. This indeed gives equal probabilities of being measured as spin-up or spin-down in the $z$-direction. The problem is that this state is exactly $|\psi \rangle = |\uparrow_x \rangle$, so the probability of the state being measured in the $x$-direction as spin-up is unity. This leads to issues in trying to reconcile the interpretation of payoffs for different states of the world; this particular state would pay $w(|\uparrow_z \rangle)$ or $w(|\downarrow_z \rangle)$ with equal probabilities until $S_z$ is measured, but would pay $w(|\uparrow_x \rangle)$ with certainty as $S_x$ has essentially already been measured, collapsing a previously unknown state into this one. So there seems to be an issue with trying to stuff the interpretation of probabilistic measurement of a state of the world into the idea of superposing quantum states, as certain measurable states of the world do not commute ($[S_i, S_j] = i\hbar \epsilon_{ijk} S_k$ for instance). So what can be done now?
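
To make the issue concrete, here is a small numerical sketch (my own addition, in Python with numpy, representing $|\uparrow_z \rangle$ and $|\downarrow_z \rangle$ as the standard basis vectors) showing that the equal superposition gives 50/50 outcomes for $S_z$ but a certain outcome for $S_x$:

import numpy as np

# z-basis states
up_z = np.array([1.0, 0.0])
down_z = np.array([0.0, 1.0])

# x-basis states (eigenvectors of S_x)
up_x = (up_z + down_z) / np.sqrt(2)
down_x = (up_z - down_z) / np.sqrt(2)

# the equal superposition discussed above
psi = (up_z + down_z) / np.sqrt(2)

# Born-rule probabilities |<phi|psi>|^2
print(abs(np.dot(up_z, psi))**2, abs(np.dot(down_z, psi))**2)   # 0.5, 0.5
print(abs(np.dot(up_x, psi))**2, abs(np.dot(down_x, psi))**2)   # 1.0, 0.0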

Recall that each state of the world $j$ has an associated probability $\mathbb{P}_j$. Yet, once a state is measured, those probabilities are meaningless, because a state becomes observed or not observed; it cannot be "observed in fraction $\mathbb{P}_j$" or anything like that. Taking the frequentist view, these probabilities are only meaningful in the large ensemble limit; after a large number of observations, the fraction of those observations corresponding to the state $j$ converges to $\mathbb{P}_j$. This is exactly the same rationale behind the density matrix, which describes the state of an ensemble of systems which are usually not all prepared in the same quantum state, but in the large number limit, the fraction of those prepared in the state $|\psi_j \rangle$ is $\mathbb{P}_j$ such that $\sum_j \mathbb{P}_j = 1$. It is then defined as \[ \rho = \sum_j \mathbb{P}_j |\psi_j \rangle \langle \psi_j | . \] Note that in general $\langle \psi_j | \psi_{j'} \rangle \neq \delta_{jj'}$ because the states do not in general need to be orthogonal. Furthermore, different state preparations can have the same density matrix, in which case the "different" state preparations are actually physically indistinguishable if a quantum mechanical measurement is made.

So what does this have to do with the states of the world? If the example of a fair coin flip is again translated into measuring each eigenvalue of $S_z$ with equal probability, then the density matrix becomes $\rho = \frac{1}{2} \left(|\uparrow_z \rangle \langle \uparrow_z | + |\downarrow_z \rangle \langle \downarrow_z |\right)$. Now it is much more plausible to define a wealth $w(|\psi_j \rangle)$ dependent on the state of the world.

For example, let us now consider an unfair "coin" represented by the density matrix $\rho = \frac{1}{3} \left(|\uparrow_z \rangle \langle \uparrow_z | + 2|\downarrow_z \rangle \langle \downarrow_z |\right)$. What is the probability of measuring this system in the state $|\uparrow_x \rangle$? That would be $\langle \uparrow_x | \rho | \uparrow_x \rangle = \frac{1}{2}$. Similarly, $\langle \downarrow_x | \rho | \downarrow_x \rangle = \frac{1}{2}$. It is interesting to note that equal probabilities are achieved for spin-up and spin-down in the $x$-direction (and $y$-direction as well) for this system; the difference is that now there are off-diagonal terms in the density matrix describing the degree of interference between the states of $S_x$ in the two populations of spins.
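
Here is a quick numerical check of those numbers (my own addition, same basis conventions as the sketch above), including the off-diagonal terms that appear once the density matrix is written in the $S_x$ basis:

import numpy as np

up_z = np.array([1.0, 0.0])
down_z = np.array([0.0, 1.0])
up_x = (up_z + down_z) / np.sqrt(2)
down_x = (up_z - down_z) / np.sqrt(2)

# unfair "coin": prepared as up_z with probability 1/3 and down_z with probability 2/3
rho = (np.outer(up_z, up_z) + 2.0 * np.outer(down_z, down_z)) / 3.0

print(np.trace(rho))             # 1.0, as required of a density matrix
print(up_z @ rho @ up_z)         # 1/3
print(down_z @ rho @ down_z)     # 2/3
print(up_x @ rho @ up_x)         # 1/2
print(down_x @ rho @ down_x)     # 1/2

# in the x-basis the same density matrix has off-diagonal (interference) terms:
U = np.column_stack([up_x, down_x])   # columns are the x-basis states
print(U.T @ rho @ U)                  # [[1/2, -1/6], [-1/6, 1/2]]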

What can we do with this? Payoffs now need to be defined over every plausible noncommuting observable in the Hilbert space (because commuting observables share the same eigenstates, and the payoff is really defined for the state). Thankfully this is fairly simple for spins, as the noncommuting observables in question are the components of $\vec{S}$.

Let us return to the previous unfair "coin". Classically (so it would be a coin), the result of the coin flip would be the end of the story. Let us suppose that the consumer had a risk-neutral utility $V(w) = w$. Let us also suppose that there was a game which paid off 20 if heads and 0 if tails and cost 10 to enter. The expected payoff, accounting for the cost of entering, would be $\mathbb{E}(w) = -\frac{10}{3}$, so the consumer would prefer not to play.
Classically that would be the whole story. Quantum mechanically, though, there is more to the story. If the owner of the game decided to measure $S_z$, then the result would be the same as the classical result. If the owner decided instead to measure $S_x$ or $S_y$, then $\mathbb{E}(w) = 0$ so the consumer would be indifferent between playing and not playing, which is certainly a different outcome from preferring not to play. In fact, if the consumer does not know for sure what the owner decides to measure, then probabilities could be assigned to the measurement choice itself, and these could then be used to find the expectation value of payoff or utility over all possible choices, where each of those choices will have an expected payoff or utility as well.
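
A small sketch of that payoff calculation (my own addition; "heads" is identified with spin-up along whichever component the owner measures, and the risk-neutral utility is just the payoff itself):

import numpy as np

up_z = np.array([1.0, 0.0]); down_z = np.array([0.0, 1.0])
up_x = (up_z + down_z) / np.sqrt(2); down_x = (up_z - down_z) / np.sqrt(2)

rho = (np.outer(up_z, up_z) + 2.0 * np.outer(down_z, down_z)) / 3.0

def expected_payoff(up, down, win=20.0, lose=0.0, cost=10.0):
    # probabilities from the density matrix for the chosen measurement basis
    p_up = up @ rho @ up
    p_down = down @ rho @ down
    return p_up * win + p_down * lose - cost

print(expected_payoff(up_z, down_z))   # -10/3: prefer not to play if S_z is measured
print(expected_payoff(up_x, down_x))   # 0: indifferent if S_x is measured instead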

One thing to look further at is how to extend this to include continuous ensembles. The example in class was how a state of the world might be the outdoor temperature measured in some range. The quantum mechanical equivalent might be having a continuum of systems, each infinitesimal one having a position inside a given range of allowed positions; the only issue with this is that matter is not continuous, so I don't think it is possible to have a density matrix in the form of $\rho = \int |\psi(s) \rangle \langle \psi(s)| \, ds$ for some continuous index $s$. However, that can certainly be further investigated, and maybe I'll do that next time. The point is that states of the world in analyzing expected utility can easily be generalized to include quantum states through density matrices, and the question of which observable to measure in a noncommuting set brings out some interesting behavior not seen in classical states.
Posted in class, college, economics, MIT, physics, quantum mechanics, science | No comments

Thursday, 25 April 2013

Magnetic Field Tensor and Continuum Optical Modes

Posted on 05:57 by Unknown
There is actually a third thing in this post, but I'm not going to list that in the title. Also, the two things in the title are separate and unrelated. Follow the jump to see it all.
Posted in class, college, electricity, MIT, physics | No comments

Monday, 15 April 2013

Harmonic Oscillator from Fields not Potentials

Posted on 12:57 by Unknown
I finally started writing my paper for 8.06 yesterday. Before that, though, I had asked my UROP supervisor, whose primary area of expertise is actually in QED and Casimir problems, a couple of questions about the topic. I was asking him why the book The Quantum Vacuum by Peter Milonni uses the magnetic potential $\vec{A}$ instead of the electromagnetic fields $\vec{E}$ and $\vec{B}$ to expand in Fourier modes and derive the harmonic oscillator Hamiltonian. He said that it is just a matter of choice, and in fact the same derivation can be done using the fields rather than the potential; moreover, he encouraged me to try this out for myself, and I did just that. Lo and behold, the right answer popped out by modifying the derivation in that book to use the fields and the restrictions of the Maxwell equations instead of the potential and the Coulomb gauge choice; follow the jump to see what it looks like. I'm basically going to write out the derivation in the book and show how I modify it at each point.

Posted in class, college, MIT, physics, qed, quantum electrodynamics, quantum mechanics | No comments

Tuesday, 9 April 2013

Charge Conservation and Legendre Transformations

Posted on 11:20 by Unknown
As a follow-up (sort of, but not exactly) to my previous post on the matter, I would like to post a few updates and new questions, using Einstein summation throughout for convenience. The first has to do with why $p^{\mu} = \int T^{(0, \mu)} d^3 x$ is a Lorentz-contravariant vector. Apparently Noether's theorem says that if some Noether current $J^{\mu}$ generates a symmetry and satisfies \[ \partial_{\mu} J^{\mu} = 0 \] then the quantity \[ q = \int J^{(0)} d^3 x \] called the Noether charge is Lorentz-invariant and conserved. The first is not easy to show, but apparently some E&M textbooks do it for the example of electric charge. The second is fairly easy to show: using the condition that $\int \partial_{j} J^{j} d^3 x = \int_{\partial \mathbb{R}^3} J^{j} d\mathcal{S}_{j} = 0$ (from the divergence theorem applied to all of Euclidean space) in conjunction with $\partial_{\mu} J^{\mu} = 0$, the result $\dot{q} = 0$ follows.
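
Spelling out that last step (with $c = 1$ so that the temporal derivative is just $\partial_t$, and assuming the current vanishes at spatial infinity): \[ \dot{q} = \int \partial_t J^{(0)} \, d^3 x = -\int \partial_{j} J^{j} \, d^3 x = -\int_{\partial \mathbb{R}^3} J^{j} \, d\mathcal{S}_{j} = 0 ,\] where the second equality uses $\partial_{\mu} J^{\mu} = 0$ and the last two use the divergence theorem as above.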

As an example, let us consider the generator of rotations and Lorentz boosts for a general energy distribution: that is the 3-index angular momentum tensor \[ M^{\mu \nu \sigma} = x^{\mu} T^{\nu \sigma} - x^{\nu} T^{\mu \sigma} .\] Given that $\partial_{\nu} T^{\mu \nu} = 0$ then $\partial_{\sigma} M^{\mu \nu \sigma} = (\partial_{\sigma} x^{\mu})T^{\nu \sigma} + x^{\mu} \partial_{\sigma} T^{\nu \sigma} - (\partial_{\sigma} x^{\nu})T^{\mu \sigma} - x^{\nu} \partial_{\sigma} T^{\mu \sigma}$ $= \delta_{\sigma}^{\; \mu} T^{\nu \sigma} - \delta_{\sigma}^{\; \nu} T^{\mu \sigma} = T^{\nu \mu} - T^{\mu \nu} = 0$. Therefore the 3-index angular momentum is a proper Noether current. Its corresponding conserved charge is the 2-index angular momentum integrated over spatial directions: \[ L^{\mu \nu} = \int M^{(\mu \nu, 0)} d^3 x \] (except for perhaps a sign because $M^{\mu \nu \sigma}$ is antisymmetric in its indices) and it should be easy now to show that $\dot{L}^{\mu \nu} = 0$, which is cool. The only remaining question I have is whether it is more correct to say $L^{\mu \nu} = x^{\mu} p^{\nu} - x^{\nu} p^{\mu}$ where $p^{\mu} = \int T^{(\mu, 0)} d^3 x$ as before or if the better definition is the one integrating $M^{(\mu \nu, 0)}$ over space.

Now I have an even bigger question looming ahead of me though. The Noether current generating spacetime translational symmetry is exactly the stress-energy tensor derivable as the Legendre transformation of the Lagrangian. The term involving the conjugate momenta is easy, but the term involving the Lagrangian is confusing. For a scalar field $\phi$ (and for a vector field this is easily replaced with $A^{\sigma}$), what I have seen is \[ T^{\mu \nu} = \frac{\partial \mathcal{L}}{\partial (\partial_{\mu} \phi)} \partial^{\nu} \phi - \mathcal{L} B^{\mu \nu} .\] The problem is that the tensor $B^{\mu \nu}$ seems to depend on either the field $\phi$ used or on the notation consistently used. Sometimes $B^{\mu \nu} = \delta^{\mu \nu}$, while other times $B^{\mu \nu} = \eta^{\mu \nu}$. I'm not really sure which it is supposed to be, as sometimes for scalar fields $B = \delta$ is used, while for the electromagnetic field $B = \eta$ is used, and sometimes the notation isn't even that consistent. The issue is that either one would properly specify a Lorentz-contravariant 2-index tensor, but only one of them actually defines the translational symmetry Noether current properly. Which one is it? The issue appears to be akin to the problem of two grammatically correct sentences where one carries meaning and makes sense while the other makes no sense at all (e.g. "colorless green ideas sleep furiously").
Posted in class, college, MIT, physics, qed, quantum electrodynamics, quantum mechanics | No comments

Monday, 8 April 2013

Long-Term Review: Chakra 2013.02 "Benz"

Posted on 14:19 by Unknown
I did this long-term review on my normal UROP desktop computer with the 64-bit edition of the OS. Follow the jump to see how it fared. Also do note that there are more days logged because I intend to use it for about 60-80 full hours of work, which is the equivalent of 7-10 full days in the summer, though now I am working on a part-time basis as classes have started.

Posted in Chakra, college, KDE, long, MIT, Pardus, physics, Unixoid Review, UROP | No comments

Wednesday, 27 March 2013

Hamiltonian Density and the Stress-Energy Tensor

Posted on 13:44 by Unknown
As an update to a previous post about my adventures in QED-land for 8.06, I emailed my recitation leader about whether my intuition about the meaning of the Fourier components of the electromagnetic potential solving the wave equation (and being quantized to the ladder operators) was correct. He said it basically is correct, although there are a few things that I kept in mind at the time but still need to keep in mind throughout. The first is that the canonical quantization procedure uses the potential $\vec{A}$ as the coordinate-like quantity and finds the conjugate momentum to this field to be proportional to the electric field $\vec{E}$, with the magnetic field nowhere to be found directly in the Hamiltonian. The second is that there is a different harmonic oscillator for each mode, and the number eigenstates do not represent the energy of a given photon but instead represent the number of photons present with an energy corresponding to that mode. Hence, while coherent states do indeed represent points in the phase space of $(\vec{A}, \vec{E})$, the main point is that the photon number can fluctuate, and while classical behavior is recovered for large numbers $n$ of photons as the fluctuations of the number are $\sqrt{n}$ by Poisson statistics, the interesting physics happens for low $n$ eigenstates or superpositions thereof in which $a$ and $a^{\dagger}$ play the same role as in the usual quantum harmonic oscillator. Furthermore, the third issue is that only a particular mode $\vec{k}$ and position $\vec{x}$ can be considered, because the electromagnetic potential has a value for each of those quantities, so unless those are held constant, the picture of phase space $(\vec{A}, \vec{E})$ becomes infinite-dimensional. Related to this, the fourth and fifth issues are, respectively, that $\vec{A}$ is used as the field and $\vec{E}$ as its conjugate momentum rather than using $\vec{E}$ and $\vec{B}$ because the latter two fields are coupled to each other by the Maxwell equations so they form an overcomplete set of degrees of freedom (or something like that), whereas using $\vec{A}$ as the field and finding its conjugate momentum in conjunction with a particular gauge choice (usually the Coulomb gauge $\nabla \cdot \vec{A} = 0$) yields the correct number of degrees of freedom. These explanations seem convincing enough to me, so I will leave those there for the moment.

Another major issue that I brought up with him for which he didn't give me a complete answer was the issue that the conjugate momentum to $\vec{A}$ was being found through \[ \Pi_j = \frac{\partial \mathcal{L}}{\partial (\partial_t A_j)} \] given the Lagrangian density $\mathcal{L} = \frac{1}{8\pi} \left(\vec{E}^2 - \vec{B}^2 \right)$ and the field relations $\vec{E} = -\frac{1}{c}\partial_t \vec{A}$ & $\vec{B} = \nabla \times \vec{A}$. This didn't seem manifestly Lorentz-covariant to me, because in the class 8.033 — Relativity, I had learned that the conjugate momentum to the electromagnetic potential $A^{\mu}$ in the above Lagrangian density would be the 2-index tensor \[ \Pi^{\mu \nu} = \frac{\partial \mathcal{L}}{\partial (\partial_{\mu} A_{\nu})} .\] This would make a difference in finding the Hamiltonian density \[ \mathcal{H} = \sum_{\mu} \Pi^{\mu} \partial_t A_{\mu} - \mathcal{L} = \frac{1}{8\pi} \left(\vec{E}^2 + \vec{B}^2 \right). \] I thought that the Hamiltonian density would need to be a Lorentz-invariant scalar just like the Lagrangian density. As it turns out, this is not the case, because the Hamiltonian density represents the energy which explicitly picks out the temporal direction as special, so time derivatives are OK in finding the momentum conjugate to the potential; because the Lagrangian and Hamiltonian densities look so similar, it seems like both could be Lorentz-invariant scalar functions, but deceptively, only the former is so. At this point, I figured that because the Hamiltonian and (not field conjugate, but physical) momentum looked so similar, they could arise from the same covariant vector. However, there is no "natural" 1-index vector with which to multiply the Lagrangian density to get some sort of covariant vector generalization of the Hamiltonian density, though there is a 2-index tensor, and that is the metric. I figured here that the Hamiltonian and momentum for the electromagnetic field could be related to the stress-energy tensor, which gives the energy and momentum densities and fluxes. After a while of searching online for answers, I was quite pleased to find my intuition to be essentially spot-on: indeed the conjugate momentum should be a tensor as given above, the Legendre transformation can then be done in a covariant manner, and it does in fact turn out that the result is just the stress-energy tensor \[ T^{\mu \nu} = \sum_{\xi} \Pi^{\mu \xi} \partial^{\nu} A_{\xi} - \mathcal{L}\eta^{\mu \nu} \] (UPDATE: the index positions have been corrected) for the electromagnetic field. Indeed, the time-time component is exactly the energy/Hamiltonian density $\mathcal{H} = T_{(0, 0)}$, and the Hamiltonian $H = \sum_{\vec{k}} \hbar\omega(\vec{k}) \cdot (\alpha^{\star} (\vec{k}) \alpha(\vec{k}) + \alpha(\vec{k}) \alpha^{\star} (\vec{k})) = \int T_{(0, 0)} d^3 x$. As it turns out, the momentum $\vec{p} = \sum_{\vec{k}} \hbar\vec{k} \cdot (\alpha^{\star} (\vec{k}) \alpha(\vec{k}) + \alpha(\vec{k}) \alpha^{\star} (\vec{k}))$ doesn't look similar just by coincidence: $p_j = \int T_{(0, j)} d^3 x$. The only remaining point of confusion is that it seems like the Hamiltonian and momentum should together form a Lorentz-covariant vector $p_{\mu} = (H, p_j)$, yet if the stress-energy tensor respects Lorentz-covariance, then integrating over the volume element $d^3 x$ won't respect transformations in a Lorentz-covariant manner.
I guess because the individual components of the stress-energy tensor transform under a Lorentz boost and the volume element does as well, then maybe the vector $p_{\mu}$ as given above will respect Lorentz-covariance. (UPDATE: another issue I was having but forgot to write before clicking "Publish" was the fact that only the $T_{(0, \nu)}$ components are being considered. I wonder if there is some natural 1-index Lorentz-covariant vector $b_{\nu}$ to contract with $T_{\mu \nu}$ so that the result is a 1-index vector which in a given frame has a temporal component given by the Hamiltonian density and spatial components given by the momentum density.) Overall, I think it is interesting that this particular hang-up was over a point in classical field theory and special relativity and had nothing to do with the quantization of the fields; in any case, I think I have gotten over the major hang-ups about this and can proceed reading through what I need to read for the 8.06 paper.
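
As a small sanity check on the noncovariant conjugate-momentum formula quoted at the start of this post, here is a sympy sketch (my own addition, in Gaussian units as in the rest of the post) that computes $\Pi_j = \partial \mathcal{L}/\partial(\partial_t A_j)$ for $\mathcal{L} = \frac{1}{8\pi}\left(\vec{E}^2 - \vec{B}^2\right)$ with $\vec{E} = -\frac{1}{c}\partial_t \vec{A}$ and confirms that the result is proportional to the electric field:

import sympy as sp

c = sp.symbols('c', positive=True)
Adot = sp.symbols('Adot1 Adot2 Adot3')   # components of dA/dt
B = sp.symbols('B1 B2 B3')               # components of B = curl A (independent of dA/dt)

E = [-a / c for a in Adot]               # E = -(1/c) dA/dt
L = (sum(e**2 for e in E) - sum(b**2 for b in B)) / (8 * sp.pi)

Pi = [sp.diff(L, a) for a in Adot]       # conjugate momenta Pi_j = dL/d(dA_j/dt)
# each difference simplifies to zero, i.e. Pi_j = -E_j/(4 pi c):
print([sp.simplify(p - (-e / (4 * sp.pi * c))) for p, e in zip(Pi, E)])   # [0, 0, 0]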
Posted in class, college, MIT, physics, qed, quantum electrodynamics, quantum mechanics | No comments

Tuesday, 26 March 2013

Schrödinger and Biot-Savart

Posted on 10:47 by Unknown
There were two things that I would like to post here today. The first is something I have been mulling over for a while. The second is something that I thought about more recently.

Time evolution in nonrelativistic quantum mechanics occurs according to the [time-dependent] Schrödinger equation \[ H|\Psi\rangle = i\hbar \frac{\partial}{\partial t} |\Psi\rangle .\] While this at first may seem intractable, the trick is that typically the Hamiltonian is not time-dependent, so a candidate solution could be $|\Psi\rangle = \phi(t)|E\rangle$. Plugging this back in yields time evolution that occurs through the phase $\phi(t) = e^{-\frac{iEt}{\hbar}}$ applied to energy eigenstates that solve \[ H|E\rangle = E \cdot |E\rangle \] and this equation is often called the "time-independent Schrödinger equation". When I was taking 8.04 — Quantum Physics I, I agreed with my professor who called this a misnomer, in that the Schrödinger equation is supposed to only describe time evolution, so what is being called "time-independent" is more properly just an energy eigenvalue equation. That said, I was thinking that the "time-independent Schrödinger equation" is really just like a Fourier transform of the Schrödinger equation from time to frequency (related to energy by $E = \hbar\omega$), so the former could be an OK nomenclature because it is just a change of basis. However, there are two things to note: the Schrödinger equation is basis-independent, whereas the "time-independent Schrödinger equation" is expressed only in the basis of energy eigenstates, and time is not an observable quantity (i.e. Hermitian operator) but is a parameter, so the change of basis/Fourier transform argument doesn't work in quite the same way that it does for position versus momentum. Hence, I've come to the conclusion that it is better to refer to the "time-independent Schrödinger equation" as the energy eigenvalue equation.
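
Here is a small numerical illustration of that change of basis (my own addition in Python/numpy, with $\hbar = 1$ and a randomly chosen Hermitian matrix standing in for a time-independent Hamiltonian): evolving directly with $e^{-\frac{iHt}{\hbar}}$ agrees with expanding the initial state in energy eigenstates and attaching the phases $e^{-\frac{iE_n t}{\hbar}}$.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2                      # a random time-independent Hamiltonian (hbar = 1)

psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)

t = 2.7
psi_direct = expm(-1j * H * t) @ psi0         # direct solution of the Schrodinger equation

E, V = np.linalg.eigh(H)                      # energy eigenvalue equation H|E> = E|E>
c = V.conj().T @ psi0                         # expansion coefficients in the energy basis
psi_eigen = V @ (np.exp(-1j * E * t) * c)     # attach a phase exp(-iEt) to each eigenstate

print(np.allclose(psi_direct, psi_eigen))     # True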

Switching gears, I was thinking about how the Biot-Savart law is derived. My AP Physics C teacher told me that the Ampère law is derived from the Biot-Savart law. However, this is patently not true, because the Biot-Savart law only works for charges moving at a constant velocity, whereas the Ampère law is true for magnetic fields created by any currents or any changing electric fields. In 8.022 — Physics II, I did see a derivation of the Biot-Savart law from the Ampère law, showing that the latter is indeed more fundamental than the former, but it involved the magnetic potential and a lot more work. I wanted to see if that derivation still made sense to me, but then I realized that because magnetism essentially springs from the combination of electricity and special relativity and because the Biot-Savart law relies on the approximation of the charges moving at a constant velocity, it should be possible to derive the Biot-Savart law from the Coulomb law and special relativity. Indeed, it is possible. Consider a charge $q$ whose electric field is \[ \vec{E} = \frac{q}{r^2} \vec{e}_r \] in its rest frame. Note that the Coulomb law is exact in the rest frame of a charge. Now consider a frame moving with respect to the charge at a velocity $-\vec{v}$, so that observers in the frame see the charge move at a velocity $\vec{v}$. Considering only the component of the magnetic field perpendicular to the relative motion, noting that there is no magnetic field in the rest frame of the charge, and taking the low-speed limit (which is the range of validity of the Biot-Savart law) $\left|\frac{\vec{v}}{c}\right| \ll 1$ so that $\gamma \approx 1$ yields $\vec{B} \approx -\frac{\vec{v}}{c} \times \vec{E}$. Plugging in $-\vec{v}$ (the specified velocity of the new frame relative to the charge) for $\vec{v}$ (the general expression for the relative velocity) and plugging in the Coulomb expression for $\vec{E}$ yields the Biot-Savart law \[ \vec{B} = \frac{q\vec{v} \times \vec{e}_r}{cr^2}. \] One thing to be emphasized again is that the Coulomb law is exact in the rest frame of the charge, while the Biot-Savart law is always an approximation because a moving charge will have an electric field that deviates from the Coulomb expression; the fact that the Biot-Savart law is a low-speed inertial approximation is why I feel comfortable doing the derivation this way.
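
For reference, the step being glossed over uses the standard transformation of the field components perpendicular to the boost (in Gaussian units, and modulo the sign conventions one chooses): if $\vec{u}$ denotes the velocity of the new frame relative to the charge, then \[ \vec{B}'_{\perp} = \gamma \left(\vec{B} - \frac{\vec{u}}{c} \times \vec{E}\right)_{\perp} \approx -\frac{\vec{u}}{c} \times \vec{E} \] once $\vec{B} = 0$ in the rest frame and $\gamma \approx 1$ are used, and setting $\vec{u} = -\vec{v}$ with the Coulomb field $\vec{E} = \frac{q}{r^2} \vec{e}_r$ reproduces the Biot-Savart expression above.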
Posted in AP, class, college, electricity, MIT, physics, quantum electrodynamics, quantum mechanics, school | No comments

Thursday, 21 March 2013

Time and Temperature are Complex

Posted on 13:52 by Unknown
In a post from a few days ago, I briefly mentioned the notion of imaginary time with regard to angular momentum. I'd like to go into that a little further in this post.

In 3 spatial dimensions, the flat (Euclidean) metric is $\eta_{ij} = \delta_{ij}$, which is quite convenient, as lengths are given by $(\Delta s)^2 = (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2$ which is just the usual Pythagorean theorem. When a temporal dimension is added, as in special relativity, the coordinates are now $x^{\mu} = (ct, x_{j})$, and the Euclidean metric becomes the Minkowski metric $\eta_{\mu \nu} = \mathrm{diag}(-1, 1, 1, 1)$ so that $\eta_{tt} = -1$, $\eta_{(t, j)} = 0$, and $\eta_{ij} = \delta_{ij}$. This means that spacetime intervals become $(\Delta s)^2 = -(c\Delta t)^2 + (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2$, which is the normal Pythagorean theorem only if $\Delta t = 0$. In general, time coordinate differences contribute negatively to the spacetime interval. In addition, Lorentz transformations are given by a hyperbolic rotation by a [hyperbolic] angle $\alpha$ equal to the rapidity given by $\frac{v}{c} = \tanh(\alpha)$. This doesn't look quite the same as normal Euclidean geometry. However, a transformation to imaginary time, called a Wick rotation, can be done by setting $\tau = it$, so $x^{\mu} = (c\tau, x_{j})$, $\eta_{\mu \nu} = \delta_{\mu \nu}$, $(\Delta s)^2 = (c\Delta \tau)^2 + (\Delta x)^2 + (\Delta y)^2 + (\Delta z)^2$ as in the usual Pythagorean theorem, and the Lorentz transformation is given by a real rotation by an angle $\theta = i\alpha$ (though I may have gotten some of these signs wrong so forgive me) where $\alpha$ is now imaginary. Now, the connection to the component $L_{(0, j)}$ of the angular momentum tensor should be more clear.

I first encountered this in the class 8.033 — Relativity, where I was able to explore this curiosity on a problem set. That question and the accompanying discussion seemed to say that while this is a cool thing to try doing once, it isn't really useful, especially because it does not hold true in general relativity with more general metrics $g_{\mu \nu} \neq \eta_{\mu \nu}$ except in very special cases. However, as it turns out, imaginary time does play a role in quantum mechanics, even without the help of relativity.

Schrödinger time evolution occurs through the unitary transformation $u = e^{-\frac{itH}{\hbar}}$ satisfying $uu^{\dagger} = u^{\dagger} u = 1$. This means that the probability that an initial state $|\psi\rangle$ ends after time $t$ in the same state is given by the amplitude (whose squared magnitude is the probability [density]) $\mathfrak{p}(t) = \langle\psi|e^{-\frac{itH}{\hbar}}|\psi\rangle$. Meanwhile, assuming the states $|\psi\rangle$ form a complete and orthonormal basis (though I don't know if this assumption is truly necessary), the partition function $Z = \mathrm{trace}\left(e^{-\frac{H}{k_B T}}\right)$, which can be expanded in the basis $|\psi\rangle$ as $Z = \sum_{\psi} \langle\psi|e^{-\frac{H}{k_B T}}|\psi\rangle$. This, however, is just as well rewritten as $Z = \sum_{\psi} \mathfrak{p}\left(t = -\frac{i\hbar}{k_B T}\right)$. Hence, quantum and statistical mechanical information can be obtained from the same amplitudes using the substitution $t = -\frac{i\hbar}{k_B T}$, which essentially calls temperature a reciprocal imaginary time. This is not really meant to show anything more deep or profound about the connection between time and temperature; it is really more of a trick stemming from the fact that the same Hamiltonian can be used to solve problems in quantum mechanics or equilibrium statistical mechanics.
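
Here is a tiny numerical illustration of that substitution (my own addition in Python, with a randomly chosen Hermitian matrix as the Hamiltonian and $\hbar = k_B = 1$): summing the return amplitudes $\mathfrak{p}(t)$ over an orthonormal basis at $t = -\frac{i\hbar}{k_B T}$ reproduces $Z = \mathrm{trace}\left(e^{-\frac{H}{k_B T}}\right)$.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2                      # some Hamiltonian (hbar = k_B = 1)

T = 0.7
t_imag = -1j / T                              # t = -i hbar / (k_B T)

U = expm(-1j * H * t_imag)                    # e^{-itH/hbar} evaluated at imaginary time
basis = np.eye(3)                             # any complete orthonormal basis works
Z_from_amplitudes = sum(basis[:, j].conj() @ U @ basis[:, j] for j in range(3))

Z_direct = np.trace(expm(-H / T))             # the partition function trace(e^{-H/k_B T})
print(np.allclose(Z_from_amplitudes, Z_direct))   # True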

As an aside, it turns out that temperature, even when measured in an absolute scale, can be negative. There are plenty of papers on this online, but suffice it to say that this comes from a more general statistical definition of temperature. Rather than defining it (as it commonly is) as the average kinetic energy of particles, it is better to define it in terms of the probability distribution describing how likely a particle is to have a given energy. Usually, particles tend to be in lower energy states more than in higher energy states, and as a consequence, the temperature is positive. However, it is possible (and has been done repeatedly) under certain circumstances to cleverly force the system in a way that causes particles to be in higher energy states with higher probability than in lower energy states, and this is exactly the negative temperature. More formally, $\frac{1}{T} = \frac{\partial S}{\partial E}$ where $E$ is the energy and $S$ is the entropy of the system, which is a measure of how many different states the system can possibly have for a given energy. For positive temperature, if two objects of different temperatures are brought into contact, energy will flow from the hotter one to the colder, cooling the former and heating the latter until equal temperatures are achieved. For negative temperature, though, if an object with negative temperature is brought in contact with an object that has positive temperature, each object tends to increase its own entropy. Like most normal objects, the latter does this by absorbing energy, but by the definition of temperature, the former does this by releasing energy, meaning the former will spontaneously heat the latter. Hence, negative temperature is hotter than positive temperature; this is a quirk of the definition of reciprocal temperature, so really what is happening is that absolute zero on the positive side is still the coldest possible temperature, absolute zero on the negative side is now the hottest temperature, and $\pm \infty$ is in the middle.
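
A concrete way to see this (my own addition, sketched in Python with $k_B = 1$) is a collection of $N$ two-level systems with level spacing $\epsilon$: when $n$ of them are excited, the entropy is $S = \ln \binom{N}{n}$ and the energy is $E = n\epsilon$, and the temperature from $\frac{1}{T} = \frac{\partial S}{\partial E}$ turns negative as soon as more than half of the systems are excited (a population inversion).

import numpy as np
from scipy.special import gammaln

N = 1000                                       # number of two-level systems
eps = 1.0                                      # level spacing (k_B = 1)
n = np.arange(0, N + 1)                        # number of excited systems

E = n * eps
S = gammaln(N + 1) - gammaln(n + 1) - gammaln(N - n + 1)   # S = ln C(N, n)

for n0 in (N // 4, 3 * N // 4):
    beta = (S[n0 + 1] - S[n0 - 1]) / (E[n0 + 1] - E[n0 - 1])   # 1/T = dS/dE
    print(n0, 1.0 / beta)     # positive T below half filling, negative T above it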

This was really just me writing down stuff that I had been thinking about a couple of months ago. I hope this helps someone, and I also await the day when TV newscasters say "complex time brought to you by..." instead of "time and temperature brought to you by...".
Posted in class, college, mathematics, MIT, physics, qed, quantum electrodynamics, quantum mechanics, statistical mechanics | No comments

Wednesday, 20 March 2013

Nonzero Electromagnetic Fields in a Cavity

Posted on 08:03 by Unknown
The class 8.06 — Quantum Physics III requires a final paper, written essentially like a review article of a certain area of physics that uses quantum mechanics and that is written for the level of 8.06 (and not much higher). At the same time, I have also been looking into other possible UROP projects because while I am quite happy with my photonic crystals UROP and would be pleased to continue with it, that project is the only one I have done at MIT thus far, and I would like to try at least one more thing before I graduate. My advisor suggested that I not do something already done to death like the Feynman path integrals in the 8.06 paper but instead to do something that could act as a springboard in my UROP search. One of the UROP projects I have been investigating has to do with Casimir forces, but I pretty much don't know anything about that, QED, or [more generally] QFT. Given that other students have successfully written 8.06 papers about Casimir forces, I figured this would be the perfect way to teach myself what I might need to know to be able to start on a UROP project in that area. Most helpful thus far has been my recitation leader, who is a graduate student working in the same group that I have been looking into for UROP projects; he has been able to show me some of the basic tools in Casimir physics and point me in the right direction for more information. Finally, note that there will probably be more posts about this in the near future, as I'll be using this to jot down my thoughts and make them more coherent (no pun intended) for future reference.

Anyway, I've been able to read some more papers on the subject, including Casimir's original paper on it as well as Lifshitz's paper going a little further with it. One of the things that confused me in those papers (and in my recitation leader's explanation, which was basically the same thing) was the following. The explanation ends with the notion that quantum electrodynamic fluctuations in a space with a given dielectric constant, say in a vacuum surrounded by two metal plates, will cause those metal plates to attract or repel in a manner dependent on their separation. This depends on the separation being comparable to the wavelength of the electromagnetic field (or something like that), because at much larger distances, the power of normal blackbody radiation (which ironically still requires quantum mechanics to be explained) does not depend on the separation of the two objects, nor does it really depend on their geometries, but only on their temperatures. The explanation of the Casimir effect starts with the notion of an electromagnetic field confined between two infinite perfectly conducting parallel plates, so the fields form standing waves like the wavefunctions of a quantum particle in an infinite square well. This is all fine and dandy...except that this presumes that there is an electromagnetic field. This confused me: why should one assume the existence of an electromagnetic field, and why couldn't it be possible to assume that there really is no field between the plates?

Then I remembered what the deal is with quantization of the electromagnetic field and photon states from 8.05 — Quantum Physics II. The derivation from that class still seems quite fascinating to me, so I'm going to repost it here. You don't need to know QED or QFT, but you do need to be familiar with Dirac notation and at least a little comfortable with the quantization of the simple harmonic oscillator.

Let us first get the classical picture straight. Consider an electromagnetic field inside a cavity of volume $\mathcal{V}$. Let us only consider the lowest-energy mode, which is when $k_x = k_y = 0$ so only $k_z > 0$, stemming from the appropriate application of boundary conditions. The energy density of the system can be given as \[H = \frac{1}{8\pi} \left(\vec{E}^2 + \vec{B}^2 \right)\] and the fields that solve the dynamic Maxwell equations \[\nabla \times \vec{E} = -\frac{1}{c} \frac{\partial \vec{B}}{\partial t}\] \[\nabla \times \vec{B} = \frac{1}{c} \frac{\partial \vec{E}}{\partial t}\] as well as the source-free Maxwell equations \[\nabla \cdot \vec{E} = \nabla \cdot \vec{B} = 0\] can be written as \[\vec{E} = \sqrt{\frac{8\pi}{\mathcal{V}}} \omega Q(t) \sin(kz) \vec{e}_x\] \[\vec{B} = \sqrt{\frac{8\pi}{\mathcal{V}}} P(t) \cos(kz) \vec{e}_y\] where $\vec{k} = k_z \vec{e}_z = k\vec{e}_z$ and $\omega = c|\vec{k}|$. The prefactor comes from normalization, the spatial dependence and direction come from boundary conditions, and the time dependence is somewhat arbitrary. I think this is because the spatial conditions are unaffected by time dependence if they are separable, and the Maxwell equations are linear so if a periodic function like a sinusoid or complex exponential in time satisfies Maxwell time evolution, so does any arbitrary superposition (Fourier series) thereof. That said, I'm not entirely sure about that point. Also note that $P$ and $Q$ are not entirely arbitrary, because they are restricted by the Maxwell equations. Plugging the fields into those equations yields conditions on $P$ and $Q$ given by \[\dot{Q} = P\] \[\dot{P} = -\omega^2 Q\] which looks suspiciously like simple harmonic motion. Indeed, plugging these electromagnetic field components into the Hamiltonian [density] yields \[H = \frac{1}{2} \left(P^2 + \omega^2 Q^2 \right)\] which is the equation for a simple harmonic oscillator with $m = 1$; this is because the electromagnetic field has no mass, so there is no characteristic mass term to stick into the equation. Note that these quantities have a canonical Poisson bracket $\{Q, P\} = 1$, so $Q$ can be identified as a position and $P$ can be identified as a momentum, though they are actually neither of those things but are simply mathematical conveniences to simplify expressions involving the fields; this will become useful shortly.
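
To double-check the claim that the Maxwell equations force $\dot{Q} = P$ and $\dot{P} = -\omega^2 Q$, here is a small sympy sketch (my own addition) that plugs the mode written above into the two curl equations; for these fields each equation has only one nontrivial component.

import sympy as sp

t, z, c, k, V = sp.symbols('t z c k V', positive=True)
Q = sp.Function('Q')(t)
P = sp.Function('P')(t)
w = c * k                                          # omega = c|k|
amp = sp.sqrt(8 * sp.pi / V)

Ex = amp * w * Q * sp.sin(k * z)                   # E = amp * omega * Q(t) sin(kz) e_x
By = amp * P * sp.cos(k * z)                       # B = amp * P(t) cos(kz) e_y

# curl E = -(1/c) dB/dt reduces to its y-component for these fields:
eq1 = sp.Eq(sp.diff(Ex, z), -sp.diff(By, t) / c)
# curl B = (1/c) dE/dt reduces to its x-component:
eq2 = sp.Eq(-sp.diff(By, z), sp.diff(Ex, t) / c)

print(sp.simplify(sp.solve(eq1, sp.diff(P, t))[0]))   # -c**2*k**2*Q(t), i.e. dP/dt = -omega**2 Q
print(sp.simplify(sp.solve(eq2, sp.diff(Q, t))[0]))   # P(t), i.e. dQ/dt = P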

Quantizing this turns the canonical Poisson bracket relation into the canonical commutation relation $[Q, P] = i\hbar$. This also implies that $[E_a, B_b] \neq 0$, which is huge: this means that states of the photon cannot have definite values for both the electric and magnetic fields simultaneously, just as a quantum mechanical particle state cannot have both a definite position and momentum. Now the fields themselves are operators that depend on space and time as parameters, while the states are now vectors in a Hilbert space defined for a given mode $\vec{k}$, which has been chosen in this case as $\vec{k} = k\vec{e}_z$ for some allowed value of $k$. The raising and lowering operators $a$ and $a^{\dagger}$ can be defined in the usual way but with the substitutions $m \rightarrow 1$, $x \rightarrow Q$, and $p \rightarrow P$. The Hamiltonian then becomes $H = \hbar\omega \cdot \left(a^{\dagger} a + \frac{1}{2} \right)$, where again $\omega = c|\vec{k}|$ for the given mode $\vec{k}$. This means that eigenstates of the Hamiltonian are the usual $|n\rangle$, where $n$ specifies the number of photons which have mode $\vec{k}$ and therefore frequency $\omega$; this is in contrast to the single particle harmonic oscillator eigenstate $|n\rangle$ which specifies that there is only one particle and it has energy $E_n = \hbar \omega \cdot \left(n + \frac{1}{2} \right)$. This makes sense on two counts: for one, photons are bosons, so multiple photons should be able to occupy the same mode, and for another, each photon carries energy $\hbar\omega$, so adding a photon to a mode should increase the energy of the system by a unit of the energy of that mode, and indeed it does. Also note that these number eigenstates are not eigenstates of either the electric or the magnetic fields, just as normal particle harmonic oscillator eigenstates are not eigenstates of either position or momentum. (As an aside, the reason why lasers are called coherent is because they are composed of light in coherent states of a given mode satisfying $a|\alpha\rangle = \alpha \cdot |\alpha\rangle$ where $\alpha \in \mathbb{C}$. These, as opposed to energy/number eigenstates, are physically realizable.)

So what does this have to do with quantum fluctuations in a cavity? Well, if you notice, just as with the usual quantum harmonic oscillator, this Hamiltonian has a ground state energy above the minimum of the potential given by $\frac{1}{2} \hbar\omega$ for a given mode; this corresponds to having no photons in that mode. Hence, even an electrodynamic vacuum has a nonzero ground state energy. Equally important is the fact that while the mean fields $\langle 0|\vec{E}|0\rangle = \langle 0|\vec{B}|0\rangle = \vec{0}$, the field fluctuations $\langle 0|\vec{E}^2|0\rangle \neq 0$ and $\langle 0|\vec{B}^2|0 \rangle \neq 0$; thus, the electromagnetic fields fluctuate with some nonzero variance even in the absence of photons. This relieves the confusion I was having earlier about why any analysis of the Casimir effect assumes the presence of an electromagnetic field in a cavity by way of nonzero fluctuations even when no photons are present. Just to tie up the loose ends, because the Casimir effect is introduced as having the electromagnetic field in a cavity, the allowed modes are standing waves with wavevectors given by $\vec{k} = k_x \vec{e}_x + k_y \vec{e}_y + \frac{\pi n_z}{l} \vec{e}_z$ where $n_z \in \mathbb{Z}$, assuming that the cavity bounds the fields along $\vec{e}_z$ but the other directions are left unspecified. This means that each different value of $\vec{k}$ specifies a different harmonic oscillator, and each of those different harmonic oscillators is in the ground state in the absence of photons. You'll be hearing more about this in the near future, but for now, thinking through this helped me clear up my basic misunderstandings, and I hope anyone else who was having the same misunderstandings feels more comfortable with this now.
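
To back up that last claim numerically, here is a sketch (my own addition in Python/numpy, with $\hbar = 1$) that builds truncated matrices for $a$ and $a^{\dagger}$ in the number basis and checks that the vacuum expectation value of $Q$ vanishes while that of $Q^2$ does not; since $\vec{E} \propto \omega Q$ and $\vec{B} \propto P$ for the mode above, the same statement holds for the fields.

import numpy as np

N = 40                                           # truncation of the Fock space
omega = 2.0                                      # mode frequency (hbar = 1)

a = np.diag(np.sqrt(np.arange(1, N)), k=1)       # lowering operator in the |n> basis
adag = a.conj().T

Q = np.sqrt(1.0 / (2 * omega)) * (a + adag)      # Q with m = 1
P = 1j * np.sqrt(omega / 2.0) * (adag - a)       # conjugate momentum

vac = np.zeros(N); vac[0] = 1.0                  # the zero-photon state |0>

print(vac @ Q @ vac)                              # 0: the mean field vanishes
print(vac @ (Q @ Q) @ vac, 1.0 / (2 * omega))     # hbar/(2 omega) != 0: nonzero fluctuations
print((vac @ (P @ P) @ vac).real, omega / 2.0)    # same for the momentum / magnetic field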
Posted in class, college, electricity, MIT, physics, qed, quantum electrodynamics, quantum mechanics, statistical mechanics, UROP | No comments

Monday, 18 March 2013

A Less-Seen View of Angular Momentum

Posted on 07:16 by Unknown
Many people learn in basic physics classes that angular momentum is a signed scalar quantity that describes the magnitude and sense of rotation, such that its rate of change is equal to the sum of all torques $\tau = \dot{L}$, akin to Newton's equation of motion $\vec{F} = \dot{\vec{p}}$. People who take more advanced physics classes, such as 8.012 — Physics I, learn that in fact angular momentum and torque are vectors; in the case of fixed-axis rotation, the moment of inertia (the rotational equivalent to mass) is a scalar so $\vec{L} = I\vec{\omega}$ means that angular momentum points in the same direction as angular velocity. By contrast, in general rigid body motion, the moment of inertia becomes anisotropic and must be described by a tensor, so \[\vec{L} = \stackrel{\leftrightarrow}{I} \cdot \vec{\omega}\] implies that angular momentum is no longer parallel to angular velocity, but instead the components are related (using Einstein summation for convenience) by \[L_i = I_{ij} \omega_{j}.\] This becomes important in the analysis of situations like gyroscopes and torque-induced precession, torque-free precession, and nutation.

There is one problem though: there is nothing particularly vector-like about angular momentum. It is constructed as a vector essentially for mathematical convenience. The definition $\vec{L} = \vec{x} \times \vec{p}$ only works in 3 dimensions. Why is this? Let's look at the definition of the cross product components: in 3 dimensions, the permutation tensor has 3 indices, so contracting it with 2 vectors produces a third vector $\vec{c} = \vec{a} \times \vec{b}$ such that $c_i = \varepsilon_{ijk} a_{j} b_{k}$. One trick that is commonly taught to make the cross product easier is to turn the first vector into a matrix and then perform matrix multiplication with the column representation of the second vector to get the column representation of the resulting vector: the details of this rule are hard to remember, but the source is simple, as it is just $a_{ij} = \varepsilon_{ijk} a_{k}$. Now let us see what happens to angular velocity and angular momentum using this definition. Angular velocity was previously defined as a vector through $\vec{v} = \vec{\omega} \times \vec{x}$. We know that $\vec{x}$ and $\vec{v}$ are true vectors, while $\vec{\omega}$ is a pseudovector (meaning it flips direction when the coordinate system undergoes reflection), so $\vec{\omega}$ is the vector to be made into a tensor. Using the previous definition that in 3 dimensions $\omega_{ij} = \varepsilon_{ijk} \omega_{k}$, then \[v_i = \omega_{ij} x_{j}\] now defines the angular velocity tensor. Similarly, angular momentum is a pseudovector, so it can be made into a tensor through $L_{ij} = \varepsilon_{ijk} L_{k}$. Substituting this into the equation relating angular momenta and angular velocities yields \[L_{ij} = I_{ik} \omega_{kj}\] meaning the matrix representation of the angular momentum tensor is now the matrix multiplication of the matrices representing the moment of inertia and angular velocity tensors.

This has another consequence: the meaning of the components of the angular velocity and angular momentum becomes much clearer. Previously, $L_{j}$ was the generator of rotation in the plane perpendicular to the $j$-axis, and $\omega_{j}$ described the rate of this rotation: for instance, $L_z$ and $\omega_z$ relate to rotation in the $xy$-plane. This is somewhat counterintuitive. On the other hand, the tensor definitions $L_{ij}$ and $\omega_{ij}$ deal with rotations in the $ij$-plane: for example, $L_{xy}$ generates and $\omega_{xy}$ describes rotations in the $xy$-plane, which seem much more intuitive. Also, with this, $L_{ij} = x_{i} p_{j} - p_{i} x_{j}$ becomes a definition (though there may be a numerical coefficient that I am missing, so forgive me).
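
As a quick check that no coefficient is actually missing (my own addition in Python/numpy): contracting the permutation tensor with $\vec{L} = \vec{x} \times \vec{p}$ does reproduce $x_i p_j - x_j p_i$ exactly.

import numpy as np

eps = np.zeros((3, 3, 3))                       # the permutation (Levi-Civita) tensor
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

rng = np.random.default_rng(2)
x = rng.normal(size=3)
p = rng.normal(size=3)

L_vec = np.cross(x, p)                          # L = x cross p
L_tensor = np.einsum('ijk,k->ij', eps, L_vec)   # L_ij = eps_ijk L_k
print(np.allclose(L_tensor, np.outer(x, p) - np.outer(p, x)))   # True: L_ij = x_i p_j - x_j p_i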

The nice thing about this formulation of angular velocities and momenta as tensor quantities is that this is generalizable to 4 dimensions, be it 4 spatial dimensions or 3 spatial and 1 temporal dimension (as in relativity). $L_{\mu \nu} = x_{\mu} p_{\nu} - p_{\mu} x_{\nu}$ now defines the generator of rotation in the $\mu\nu$-plane. Similarly, $\omega_{\mu \nu}$ defined in $L_{\mu \nu} = I_{\mu}^{\; \xi} \omega^{\xi}_{\; \nu}$ describes the rate of rotation in that plane. The reason why these cannot be vectors any more is that the permutation tensor gains an additional index, so contracting it with two vectors yields a tensor with 2 indices; this means that the cross product as laid out in 3 dimensions does not work in any other number of dimensions (except, interestingly enough, for 7, and that is because a 7-dimensional Cartesian vector space can be described through the algebra of octonions which does have a cross product, just as 2-dimensional vectors can be described by complex numbers and 3-dimensional vectors can be described by quaternions).

This has a further nice consequence for special relativity. The Lorentz transformation as given in $x^{\mu'} = \Lambda^{\mu'}_{\; \mu} x^{\mu}$ is a hyperbolic rotation through an angle $\alpha$, equal to the rapidity defined by $\frac{v}{c} = \tanh(\alpha)$. A hyperbolic rotation is basically just a normal rotation through an imaginary angle. This can actually be seen by transforming to coordinates with imaginary time (called a Wick rotation, which may come back up in a post in the near future): $x^{\mu} = (ct, x^{j}) \rightarrow (ict, x^{j})$, allowing the metric to change as $\eta_{\mu \nu} = \mathrm{diag}(-1, 1, 1, 1) \rightarrow \delta_{\mu \nu}$. This changes the rapidity to just be a real angle, and the Lorentz transformation becomes a real rotation. Because only the temporal coordinate has been made imaginary while the spatial coordinates have been left untouched, because the Lorentz transformation is now a real rotation, and because angular momentum generates real rotations, then it can be said that the angular momentum components $L_{(0, j)}$ generate Lorentz boosts along the $j$-axis. This fact remains true even if the temporal coordinate is not made imaginary and the metric remains with an opposite sign for the temporal component, though the math of Lorentz boost generation becomes a little more tricky. That said, typically the conservation of angular momentum implies symmetry of the system under rotation, thanks to the Noether theorem. Naïvely, this would imply that conservation of $L_{(0, j)}$ is associated with symmetry under the Lorentz transformation. The truth is a little more complicated (but not by too much), as my advisor and I found from a few Internet searches. Basically, in nonrelativistic mechanics, just as momentum is the generator of spatial translation, position is the generator of (Galilean) momentum boosting: this can be seen in the quantum mechanical representation of momentum in the position basis $\hat{p} = -i\hbar \frac{\partial}{\partial x}$, and the analogous representation of position in the momentum basis $\hat{x} = i\hbar \frac{\partial}{\partial p}$. If the system is invariant under translation, then the momentum is conserved and the system is inertial, whereas if the system is invariant under boosting, then the position is conserved and the system is fixed at a given point in space. In relativity, the analogue to a Galilean momentum boost is exactly the Lorentz transformation, so conservation of $L_{(0, j)}$ corresponds to the system being fixed at its initial spacetime coordinate; this is OK even in relativity because spacetime coordinates are invariant geometric objects, even if their components transform covariantly.

There are a few remaining issues with this analysis. One is that rotations in 3 dimensions are just sums of pairs of rotations in planes, and rotations in 4 dimensions are just sums of pairs of rotations in 3 dimensions. This relates in some way (that I am not really sure of) to symmetries under special orthogonal/unitary transformations in those dimensions. In dimensions higher than 4, things get a lot more hairy, and I'm not sure if any of this continues to hold. Also, one remaining issue is that in special relativity, because the speed of light is fixed and finite, rigid bodies cease to exist except as an approximation, so the description of such dynamics using a moment of inertia tensor generalized to special relativity may not work anymore (though the description of angular momentum as a tensor should still work anyway). Finally, note that the generalization of particle momentum $p_{\mu}$ to a distribution of energy lies in the stress-energy tensor $T_{\mu \nu}$, so the angular momentum of such a distribution becomes a tensor with 3 indices that looks something like (though maybe not exactly like) $L_{\mu \nu \xi} = x_{\mu} T_{\nu \xi} - x_{\nu} T_{\mu \xi}$. In addition, stress-energy tensors with relativistic angular momenta may change the metric itself, so that would need to be accounted for through the Einstein field equations. Anyway, I just wanted to further explore the formulations and generalizations of angular momentum, and I hope this helped in that regard.
Posted in class, college, mathematics, MIT, physics | No comments

Saturday, 16 March 2013

Frictions, Subsidies, and Taxes

Posted on 14:54 by Unknown
One of the things I learned in my high school AP Microeconomics class was that a tax causes the supply curve to shift to the left, making the equilibrium quantity decrease and price increase. Consumer and producer surplus both decrease, but while government revenue can account for some of the loss in total welfare, some part of total welfare gets fully lost, and this is what is known as deadweight loss. I didn't have a very good intuition for how this worked at the time (though I was able to get through it on homework, quizzes, tests, and the AP exam). At the same time, though, I thought that a tax should be fully reversible by having the government subsidize producers, and that as this would be the opposite of a tax, supply would shift to the right, the equilibrium quantity would rise and price would fall, and there would be a welfare gain.

Then, when I took 14.01 — Introduction to Microeconomics, we again discussed the situation with a tax. Then we talked about subsidies, but I was confused because the mechanism seemed to be providing a subsidy to consumers rather than to producers. My intuition at that point was that taxes create deadweight loss because producers who wanted to produce and consumers who wanted to consume near the original equilibrium could not do so after the tax, so some transactions were essentially being prohibited. However, I still didn't quite understand why a subsidy would create deadweight loss, because it seemed to me like consumers who wanted to consume more and producers who wanted to produce more than the original equilibrium quantity could now do so; in other words, it seemed like more transactions were being made possible. That said, I did understand why the government would rather not subsidize producers: unless the market is perfectly competitive, producers would prefer to collude and pocket their subsidies while keeping prices high when they can. Consumers, on the other hand, prefer consuming, so subsidizing consumers is a more surefire way of increasing the equilibrium quantity, even though the price would go up rather than down.

(In 14.04 — Intermediate Microeconomic Theory, we barely touched on deadweight loss in the way it is covered in more traditional microeconomics classes.) Now, in 14.03 — Microeconomic Theory and Public Policy, I think I better understand the intuition behind deadweight losses stemming from taxes and subsidies, and why a subsidy is not the opposite of a tax. With a tax, the government effectively targets some new equilibrium quantity below the original one; the tax revenue collected, which counts positively toward total welfare, is the difference between consumers' willingness to pay and producers' willingness to accept at that quantity, multiplied by that quantity. Consumer and producer surplus both decrease, and the revenue is not enough to offset those losses, so there is an overall deadweight loss. A completely equivalent way of picturing this is to have the tax fall on consumers so that demand shifts to the left; in both cases, the equilibrium quantity drops, the government collects its revenue, the surpluses fall, and a deadweight loss appears.
Meanwhile, for a subsidy, the government targets a quantity above the original equilibrium. The spending on that subsidy is the difference between producers' willingness to accept and consumers' willingness to pay at that quantity, multiplied by that quantity. Consumer and producer surpluses both increase, but together they do not increase enough to offset the government spending, which counts negatively toward total welfare, so a deadweight loss remains.
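To make the asymmetry concrete, here is a minimal numerical sketch in Python with hypothetical linear demand and supply curves (the particular numbers and functional forms are made up for illustration, not taken from any class). A per-unit tax and a per-unit subsidy of the same size both produce a positive deadweight loss, so the subsidy cannot undo the tax:

```python
# Hypothetical linear curves: demand P_d(q) = a - b*q, supply P_s(q) = c + d*q.
a, b = 100.0, 1.0   # demand intercept and slope
c, d = 20.0, 1.0    # supply intercept and slope

q_star = (a - c) / (b + d)   # free-market equilibrium quantity

def deadweight_loss(wedge):
    """Deadweight loss from a per-unit wedge between the price consumers pay
    and the price producers receive; wedge > 0 is a tax, wedge < 0 is a subsidy."""
    q_new = (a - c - wedge) / (b + d)   # quantity traded with the wedge in place
    # The loss is the triangle between the demand and supply curves over the
    # quantity range that is either prevented (tax) or pushed past the
    # efficient level (subsidy).
    return 0.5 * abs(wedge) * abs(q_star - q_new)

print(deadweight_loss(+10.0))   # tax of 10 per unit -> 25.0
print(deadweight_loss(-10.0))   # subsidy of 10 per unit -> also 25.0, not -25.0
```

In this linear setup the loss triangle scales with the square of the per-unit wedge, which is one way to see why overshooting with a subsidy only adds its own distortion rather than canceling the one from the tax.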

It's interesting that taxes and subsidies are not opposites. The intuition is that for a tax, the revenue is not enough to compensate for the welfare losses of consumers and producers because the new equilibrium quantity is lower; by contrast, for a subsidy, the spending is too high compared to the welfare gains of consumers and producers because the new equilibrium quantity is higher. It seems the government cannot simply spend its tax revenue to undo the effects of a tax; it can only overshoot and overspend. This reminds me very much of how friction works: moving in one direction on a surface with friction causes energy loss, while turning around and moving in the other direction on that same surface most certainly does not cause energy gain. Essentially, in this model, the market is frictionless, and the government introduces friction.

Of course, this essentially contradicts Keynesian models of government taxation and spending and their respective effects. That's why care must be taken when putting microeconomic models in a macroeconomic perspective. This also doesn't consider externalities, less than perfectly competitive market structures, et cetera. Anyway, I hope my musings on this may help give other people some intuition on simple issues of deadweight loss in microeconomic theory.
Read More
Posted in class, college, economics, government intervention, MIT, semester, subsidy, tax | No comments

Wednesday, 6 March 2013

More on 2012 Fall

Posted on 06:00 by Unknown
Last semester, I was taking 8.05, 8.13, 8.231, and 14.04, along with continuing my UROP. I was busy and stressed basically all the time. Now I think I know why: it turns out that the classes I was taking were much closer to graduate classes in material, yet they came with all the trappings of an undergraduate class, like exams (that were not intentionally easy). Let me explain a little more.

8.05 — Quantum Physics II is where the linear algebra formalism and bra-ket notation of quantum mechanics are introduced and thoroughly investigated. Topics of the class include analysis of wavefunctions in 1-dimensional potentials, vectors in Hilbert spaces, matrix representations of operators, 2-state systems, applications to spin and NMR, continuous Hilbert spaces (e.g. position), the harmonic oscillator, coherent and squeezed states (including how photon states and the electromagnetic field operators can be described as harmonic oscillators), angular momentum, addition of angular momenta, and Clebsch-Gordan coefficients. OK, so considering that most of these things are expected knowledge for the GRE in physics, this is probably more like a standard undergraduate quantum mechanics curriculum than a graduate-level one. That said, this apparently serves as a perfect substitute for the graduate-level quantum theory class, because I know of a lot of people who go straight from 8.05 to the graduate relativistic quantum field theory class.

8.13 — Experimental Physics I is generally a standard undergraduate physics laboratory class (although it is considered standard in the sense that its innovations have spread far and wide). The care and detail in performing experiments, analyzing data, making presentations, and writing papers seem like fairly obvious previews of graduate life as an experimental physicist.

8.231 — Physics of Solids I might be the first class on this list that could genuinely be considered a graduate-level class for undergraduates, in part because the TAs for that class have said that it is basically a perfect substitute for the graduate class 8.511 — Theory of Solids I, allowing people who did well in 8.231 to take the graduate class 8.512 — Theory of Solids II immediately afterward. 8.231 emphasized that it is not a survey course but intends to go deep into the physics of solids; I would say that it in fact did both, being both fairly broad and incredibly deep. Even though the only prerequisite is 8.044 — Statistical Physics I, with 8.05 as a corequisite, 8.231 really requires intimate familiarity with the material of 8.06 — Quantum Physics III, which is what I am taking this semester. 8.06 introduces in fairly simple terms things like the free electron gas (which is also a review from 8.044), the tight-binding model, electrons in an electromagnetic field, the de Haas-van Alphen effect, and the integer quantum Hall effect, and it will probably also cover perturbation theory and the nearly-free electron gas. 8.231 requires a good level of comfort with these topics, as it goes into much more depth on all of them, along with the basic descriptions of crystals and lattices, reciprocal space and diffraction, intermolecular forces, phonons, band theory, semiconductor theory and doping, a little bit of the fractional quantum Hall effect (which is much more complicated than its integer counterpart), a little bit of topological insulator theory, and a little demonstration of superfluidity and superconductivity.

14.04 — Intermediate Microeconomic Theory is the other class I can confidently say is much closer to a graduate class than an undergraduate class, because I talked to the professor yesterday and he said exactly this. He said that typical undergraduate intermediate microeconomic theory classes are more like 14.03 — Microeconomic Theory and Public Policy (which I am taking now), where the constrained optimization problems are fairly mechanical and there may be some discussion on the side of applications to real-world problems. By contrast, 14.04 last semester focused on the fundamentals of abstract choice theory with much more elegant mathematical formalism, the application of those first principles to derive all of consumer and producer choice theory, partial and general equilibrium, risky choice theory, subjective risky choice theory and its connections to Arrow-Debreu securities and general equilibrium, oligopoly and game theory, asymmetric information, and other welfare problems. He said that, in contrast to a typical such class elsewhere, 14.04 here is much closer to a graduate microeconomic theory/decision theory class, and that he wanted to achieve that level of abstract conceptualization without going too far for an undergraduate audience.

At this point, I'm hoping that the experiences from last semester pay off this semester. It looks like that has been working so far!
Read More
Posted in 2012, class, college, economics, MIT, physics, semester | No comments

Friday, 1 March 2013

More on My Photonic Crystal UROP

Posted on 08:32 by Unknown
In my post at the end of the summer, I talked a bit about what I actually did in that UROP. Upon rereading it, I have come to realize that it is a little jumbled and technical. I'd like to basically rephrase it in less technical terms, along with providing more context on what I did in the 2011 fall semester. Follow the jump to see more.


Read more »
Read More
Posted in college, electricity, frequency, MIT, photonic, semester, thermophotovoltaic, UROP | No comments

Thursday, 21 February 2013

A Less-Seen View of Complex Numbers

Posted on 19:30 by Unknown
This post is a little different from the last one, only because it's more about mathematics than physics. It's based on thoughts I have been having about complex numbers and how they relate to 2-dimensional vectors. Follow the jump to see more.
Read more »
Read More
Posted in class, college, mathematics, MIT, physics | No comments

Tuesday, 19 February 2013

An Ode to Johnson Noise

Posted on 07:42 by Unknown
This idea for a post has been percolating for a while. I feel like now I am finally ready to share it.
Last semester, in the class 8.13 — Experimental Physics I, one of the experiments that I did was investigating the phenomena of Johnson noise and shot noise and using these to find, respectively, the Boltzmann constant and the electron charge. The other experiments that I did were investigating properties of hydrogen-like atoms through spectroscopy and determining the speed and decay times of cosmic-ray muons. By far the best and worst experience was with Johnson noise. Follow the jump to read on.

Read more »
Read More
Posted in class, college, excitement, learning experience, MIT, physics, statistical mechanics | No comments

Tuesday, 5 February 2013

Sixth Semester at College

Posted on 05:33 by Unknown
Today is the first day of my sixth semester at college. I'll be taking 8.06 — Quantum Physics III, 8.14 — Experimental Physics II (also known colloquially as "J-Lab"), and 14.03 — Microeconomic Theory & Public Policy. I feel like taking three classes apart from J-Lab was too much last semester, so taking two this semester should make my schedule feel a lot more sane and manageable. Plus, I now have a handle on what J-Lab expects, so meeting those expectations should be easier. Finally, my UROP sort of fell by the wayside last semester, and I don't want that to happen again, so this lighter load should let me spend more time on both my classes and my UROP. Hopefully this semester will be a good one; good luck to all my fellow classmates out there!
Read More
Posted in class, college, MIT, semester, UROP | No comments

Saturday, 2 February 2013

Reflection: 2013 IAP

Posted on 18:42 by Unknown
This IAP was a ton of fun, and I was also able to get quite a bit done. The highlight was in my UROP, which is a continuation of what I was doing in the past semester. In late November, there was a power outage that reintroduced a previously-fixed bug into the computing cluster that I used, which caused various issues for my photonic crystal calculations. After trying several different workarounds, it wasn't until two weeks ago that I figured out how to properly route around the issue; when that happened, I was quite happy to have sensible, working flux spectrum calculations. Some people from the main UROP office also came to chat with me about what I am doing for my UROP, which was cool. At the same time, I was able to start wrapping up my project from the 2011 fall semester by creating more proper figures for the work I did then.
On the side, I worked on a video for the MIT-K12 initiative, which is a partnership between MIT and the Khan Academy. I was able to create a video about friction aimed at middle school students, and I'm fairly pleased with how it turned out. It should become official in a few weeks, at which point I will have either an update to this post or a separate follow-up post to include links to that and the result of the UROP chat (whenever that gets finalized).
I was also able to start typesetting lecture notes for 8.04 — Quantum Physics I for use by MIT OCW. I'm working with two other friends on that as well to get it done more efficiently.
The last week of IAP was particularly hectic. Along with the UROP chat, I was able to participate in the Diversity Summit as part of a panel of students with disabilities. I also helped to organize the joint SPS and UWIP Physics Lightning Lectures event.
There were a few things I was not able to do, but those can be done later, and I am glad that I gave myself enough time to rest and take life at a more relaxed pace compared to that of the semester. That said, this blog will likely reenter a sort of hibernation once the semester starts next Tuesday (and another post on that will likely occur on Monday or Tuesday of this coming week). Anyway, for this weekend, I am just going to relax and enjoy the large televised football game tomorrow!
Read More
Posted in college, disability, education, middle, MIT, physics, Reflection, school, UROP | No comments