The transistor was a hard act to follow. The 1956 Nobel Prize for the invention of the transistor signified more than just the development of a device. It helped usher in a new era in which our understanding of materials using both basic and applied science was to have a renaissance. In 1958, when Physical Review Letters was born, solid state/condensed matter physics (CMP) began its growth spurt that continues to this day. This field is now the largest branch of physics, yet it is probably fair to say that its practitioners can be viewed as the silent majority. The media emphasize astronomy, particle physics, and biology far more than CMP. Part of the reason for that emphasis is the public’s desire to know how it all began, how atomic bombs work, and how living things function. The considerable interest in computers and devices does shine light on some CMP topics and, now and then, discoveries such as high temperature superconductivity or Bose-Einstein condensation do get coverage, but anything involving Einstein is news.
Perhaps a lack of media attention isn’t so important when considering that, over the past 50 years, 21 Physics Nobel Prizes were awarded to the silent majority working in CMP and associated fields, like optics and instrumentation, and that four Chemistry Nobel Prizes were awarded for subjects in CMP. The breakthroughs were both basic and applied, reflecting the view of CMP researchers that many advances in the field are truly fundamental and that the applied research in their field has changed society.
In some sense, physics is the central science. Almost all scientists need to have mastered a reasonably high level of mathematics and physics, and often it is only when their science can be explained via physical principles that they consider it to be understood. Within physics, CMP is central in this sense. Many areas of science express their results in CMP language. CMP is also central since it is in the middle with respect to energy and size scales. Other branches of physics have certainly influenced all of physics, but I believe that CMP has the strongest links, not only to other branches of physics (particularly atomic, molecular, and optical physics), but also to other science and engineering fields. Nanophysics, which many regard as a branch of CMP, has numerous examples of collaborations between CMP physicists and chemists, biologists, electrical and mechanical engineers, material scientists, and computer scientists. Researchers in nanoscience from different disciplines are interested in the same general size scale, and much of the focus is similar to that of mainstream CMP with searches for novel properties and novel materials such as fullerenes, carbon and boron nitride nanotubes, and graphene.
Diversity is an essential characteristic of CMP. When condensed matter physicists are asked to list their most important problems, they usually can’t agree. The particle astrophysics community can, and there are influential reports listing the main problems of the field, usually numbering fewer than ten. Theorists in particle astrophysics often list a similar number of fundamental questions to be answered. These lists challenge bright students, who are inspired to enter the field to try to answer the questions. Such lists also point to needs that require support and in turn help to influence funding agencies. Although lists of this kind can be valuable, we condensed matter physicists too often try to imitate that approach instead of celebrating our valuable diversity. Certainly there are important questions we would like to answer, such as how does one achieve room temperature superconductivity, and what is the proper theory of superconductivity in the cuprates? But asking condensed matter physicists how to lay the groundwork for the entire field tends to be divisive, since the field is broad and has many different pillars of knowledge giving it support.
A systematic study of materials goes back to at least the age of the alchemists, who developed useful recipes and documented properties, but whose underlying concept—that matter was gray with a veneer that produced its properties—was wrong. The concept of atoms was essential, and the invention of quantum theory eventually led to quantitative explanations and predictions. In 1929, Paul Dirac’s famous challenge suggested that solving the equations of quantum theory is the goal for much of physics. Major advances came much more easily for atoms and molecules because of their sharp optical spectra, but the broad solid-state spectra were difficult to interpret. Although it took thirty to forty years before that problem was solved, other very significant advances in CMP were made soon after Dirac’s challenge. By the 1930s, major puzzles concerning the properties of solids, such as the electronic heat capacity of metals, had been solved using quantum mechanics. In 1933, a review by Arnold Sommerfeld and Hans Bethe gave an excellent account of the status of the field. It is still a useful summary and reads like parts of the undergraduate texts in use today. Solids were viewed as strongly interacting atoms whose electronic levels were spread into bands. Vibrational, mechanical, and structural properties could be modeled along with electronic properties. The models were usually simple and generic.
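To give a flavor of the puzzle that Sommerfeld’s quantum treatment resolved (a standard textbook result, stated here for illustration, not drawn from this essay): classical statistics predicts that every conduction electron contributes to the heat capacity, whereas Pauli exclusion allows thermal excitation only of electrons within roughly $k_{B}T$ of the Fermi energy, giving the small, linear-in-$T$ electronic heat capacity actually observed in metals:

```latex
% Classical (Drude) expectation for n conduction electrons per unit volume:
C_{\mathrm{el}}^{\mathrm{classical}} = \tfrac{3}{2}\, n k_{B}
% Sommerfeld's quantum result: only states within ~k_B T of the Fermi
% energy \varepsilon_F contribute, so the electronic heat capacity is
% linear in T, with g(\varepsilon_F) the density of states at the
% Fermi level:
C_{\mathrm{el}} = \gamma T, \qquad
\gamma = \frac{\pi^{2}}{3}\, k_{B}^{2}\, g(\varepsilon_{F})
```

At room temperature this quantum result is roughly two orders of magnitude smaller than the classical expectation for ordinary metals, which was precisely the discrepancy that needed explaining.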
The interacting atoms model is still one of our most useful models of a crystalline solid. For many properties of solids, the atomic core electrons can be viewed as inert, and the valence electrons can be assumed to dominate the properties except at energies high enough to excite the core electrons. The valence electrons contribute to a sea of negative charge that can be itinerant or partially localized. Using this model along with concepts such as the pseudopotential, designed to allow the computation of electron-core (core = nucleus + core electrons) interactions, and density functional theory to calculate valence electron-electron interactions, it is possible to explain and predict many properties of a broad class of materials. This model and the concepts I’ve just mentioned are part of the standard model for CMP. However, there is another, complementary approach that plays an essential role and forms the other part.
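As a sketch of how these two ingredients fit together (the standard Kohn-Sham form of density functional theory, included here for clarity rather than taken from the essay): the valence electrons obey single-particle equations in which a pseudopotential supplies the electron-core term, while the Hartree and exchange-correlation potentials encode the electron-electron interactions:

```latex
% Kohn-Sham equations for the valence electrons: V_ps is the
% pseudopotential (electron-core interaction), V_H the Hartree
% potential, and V_xc the exchange-correlation potential of
% density functional theory.
\left[ -\frac{\hbar^{2}}{2m}\nabla^{2}
  + V_{\mathrm{ps}}(\mathbf{r})
  + V_{H}(\mathbf{r})
  + V_{xc}(\mathbf{r}) \right] \psi_{i}(\mathbf{r})
  = \varepsilon_{i}\, \psi_{i}(\mathbf{r}),
\qquad
n(\mathbf{r}) = \sum_{i}^{\mathrm{occ}} \left| \psi_{i}(\mathbf{r}) \right|^{2}
```

Because $V_{H}$ and $V_{xc}$ themselves depend on the density $n(\mathbf{r})$, these equations are solved self-consistently, which is what makes such calculations practical on a computer for real materials.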
The elementary excitation model of solids can be viewed as an example of the philosophy of emergence. In the interacting atoms model, we are given the particles (electrons and cores), and since the interactions between them are electromagnetic and quantum mechanics is appropriate, a reductionist approach should suffice to answer all our questions. By contrast, in the elementary excitation model, the approach is to explain the measured responses to probes such as temperature and electromagnetic radiation using response functions such as the heat capacity and dielectric function. The particles can be fictitious, such as holes, phonons, magnons, etc. In some cases they can be quasiparticles that behave as fermions and resemble “real” particles like electrons, or they can be bosonlike particles related to collective excitations such as sound modes. There are a few elementary excitations that don’t fall into these two groups, but for most applications, quasiparticles and collective excitations form a basis for this approach. The fact that our knowledge all comes from responses to probes raises the question of whether the particles associated with elementary excitations are also real. If it is argued that reality is the description of what we sense, and we can describe what we sense with great precision using elementary excitations, then these “particles” may be viewed as real. Such an argument brings up questions regarding reality that were discussed by Ernst Mach, Albert Einstein, Henri Bergson, and others. The important point is that our modern view merges the interacting atoms model and the elementary excitation model, and both are used together to understand the physics of solids.
Since theory usually focuses on what we can’t do with our concepts, models, and approaches, considerable attention in CMP has been devoted to systems that may not fit the above paradigm. For example, what about noncrystalline systems, or systems with electrons so strongly correlated that the usual nearly free electron methods are not appropriate? In some classic cases, the electrons are considered to be weakly coupled, and a free-electron Fermi gas picture can be used. This approach has had great success in explaining material properties, especially for simple metals. Even for some more complex cases, like so-called correlated systems, where the electrons interact more strongly, the elementary excitation model based on Lev Landau’s theory of Fermi liquids can apply. This theory associates the electronic excitations with weakly coupled electronic states that resemble free electrons, with some change in their properties, such as an effective mass, to account for some of the interactions. For strong interactions, the Landau picture is expected to be inadequate. One example is the Wigner crystallization of an electron gas at low densities. Also, for disordered systems, when the strength of the disorder is increased, Anderson localization can occur. Even in the regime where Landau theory works, when external forces such as lattice vibrations, magnetic effects, and local repulsion act on electrons, many new features can occur. An example is the formation of Cooper pairs that leads to the BCS theory of superconductivity. The BCS theory is an example of a concept developed for CMP that is used to explain phenomena in other areas, such as nuclear structure.
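A minimal illustration of the Landau picture just described (the standard textbook form, added for clarity): near the Fermi surface, the low-lying excitations of the interacting system disperse like free electrons, but with the bare mass replaced by a renormalized effective mass $m^{*}$ that absorbs part of the interactions:

```latex
% Quasiparticle dispersion near the Fermi surface in a Landau
% Fermi liquid: the excitation resembles a free electron with the
% bare mass m replaced by an effective mass m*. For a Galilean-
% invariant system, m* is fixed by the Landau parameter F_1^s.
\varepsilon(\mathbf{k}) \approx \varepsilon_{F}
  + \frac{\hbar^{2} k_{F}}{m^{*}} \left( |\mathbf{k}| - k_{F} \right),
\qquad
\frac{m^{*}}{m} = 1 + \frac{F_{1}^{s}}{3}
```

When the interactions become strong enough, this quasiparticle description breaks down, which is what opens the door to the Wigner crystallization and localization phenomena mentioned above.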
So, conceptually, there are rich paradigms involving different philosophies that can be tested in CMP, and our understanding of the applicability and limitations of those concepts is continually evolving. From this point of view, the field is fresh and should continue to attract young researchers hoping to make new inroads and breakthroughs.
New theoretical methods and instrumentation for experimentation contributed significantly to the development of CMP during the past fifty years. Theorists borrowed methods of quantum field theory developed for relativistic quantum electrodynamics and converted them into techniques for nonrelativistic many-body problems. Modern computers running physical models based on concepts such as the pseudopotential and density functional theory have enabled condensed matter physicists to explore properties of real materials with high precision. The dream of inputting the atomic number, atomic mass, and possible crystal structures of a compound into a computer code to predict material properties has become reality for a number of systems with several atoms per unit cell. As computers become more powerful, the number of different atoms per cell that can be analyzed will increase, allowing the consideration of more complex systems.
On the experimental side, the development of instrumentation has been tremendous. The dream instrument in the field of optics is a box emitting electromagnetic radiation with two dials—one for intensity and one for frequency. Using lasers and synchrotrons, radiation covering a large part of the electromagnetic spectrum can now be generated with variable intensities, and, in some spectral ranges, the radiation can be coherent. Using electrons and x rays as probes, atoms can be seen and structural information made available on a scale that has had an important effect on CMP and biology.
As I mentioned earlier, one can argue that a measure of the impact of CMP over the past fifty years is the many advances in the field honored with Nobel Prizes. The groupings in the list below are not the result of a scholarly study, and many people would characterize the awards in a different way than I have. I could have listed other awards for important CMP research too. Or I could have done a citation study. However, the point of listing the Nobel Prize winning research here is to show one measure of the vitality and success of condensed matter physics. It also gives a reasonable list of many of the highlights, and, to some extent, demonstrates the close interplay between experiment and theory that is characteristic of this field.
Some examples of Nobel Prizes related to instrumentation development are the techniques based on laser/maser principles and further developments in optical methods (1964, 1971, 1981, 1997, 2005). Another is tunneling and electron microscopy (1986). Like the transistor, broad developments related to device applications were also recognized (1973, 2000, 2007). Experimental and theoretical advances in superconductivity and the quantum Hall effect enjoyed wide attention with six Nobel Prizes (1972, 1973, 1985, 1987, 1998, 2003). Some low temperature quantum phenomena, such as the properties of helium and Bose-Einstein condensation, led to Nobel Prizes for theorists and experimentalists (1962, 1996, 2001, 2003). Theorists were sometimes recognized along with the experimentalists making the discoveries (1998), and sometimes the prize was given for theory alone (1962, 1972, 1977, 1982, 1991, 2003; chemistry 1968, 1998). New materials also drive CMP, and these discoveries were recognized (1987, 2007; chemistry 1996, 2000).
The discoveries, inventions, and other accomplishments I just discussed were made at universities, industrial laboratories, and government sponsored laboratories. The decline in the industrial segment in recent years is exemplified by the reduction of condensed matter research at Bell Laboratories, formerly a Mecca for applied and basic research in CMP. Today, universities appear to have the lion’s share of activity, and that situation presents some financial difficulties since the support needed in CMP is larger than in the past. Instrumentation is expensive, albeit not on the level of large accelerators, but if young investigators are to compete successfully, they need state-of-the-art equipment. Also, many theorists need large computer budgets. Although the case for increased funding is usually made using arguments related to future applications, many condensed matter physicists believe that the potential for important new knowledge and insights in basic science is an equally excellent justification.
Making intelligent predictions for the next fifty years is harder in this diverse field than in many others. Experiments have led the way, but, in some cases, even now, theoretical modeling can replace experimental investigations. I hope that theory will produce even better physical models related to observed phenomena. Perhaps theory will take a larger role in finding new phenomena such as new states of matter, but as in all of physics, the decisions will continue to be made by experiment. More connections to other fields are likely. For example, the study of quantum computation has involved CMP researchers in the broader issues associated with quantum information. On the applied side, developments in CMP have pointed to the need for the next generation of engineers to learn more quantum mechanics. There is also a growing interest in photovoltaics and other systems to enhance energy conversion and storage. So, there is no lack of basic and applied problems to challenge us.
There are strong arguments to support the assertion that CMP has vitality and “legs.” The field is robust and broad, and it shows no signs of slowing down. I am certainly optimistic, and my optimism grows every time a new interesting material or property is discovered.
In conclusion, although the transistor was a hard act to follow, I believe that CMP was up to the task, with PRL playing a significant role in its development. Speaking personally, I was trained to do the best research I could do and to try to “get a PRL” to announce what I had done. My second publication as a graduate student was a PRL, and I remember being incredibly proud. Although I’ve had the good fortune to publish about 100 PRLs since then, the thrill is still there when I get an acceptance notice. I’m sure that most physicists feel the same way.