Thursday, 22 July 2021

Pictures of home

Our home, Earth, is a rare and peculiarly beautiful planet: a bright sphere effortlessly pirouetting through the black. In reality, it’s constrained within a routine choreographed by gravity and conducted by its home star, the Sun. Moreover, it doesn’t dance through space-time alone but with a partner in the form of the Moon – the relationship between them as unusual as the planet itself. Even the Sun is unusual among its peers. Our home planet is at one and the same time just like all the other countless planets in orbit around trillions of stars and yet, as far as we know, so unusual that we might even be tempted to call it unique. The mere fact of our presence on its surface, beings able to ask the searching questions we do, marks it out as special.

I have written about the Earth before (here), and during lockdown I recorded a talk on the subject (find it here). I’ve no intention of revisiting this material in detail – there would be no point – but I am going to celebrate the Earth in another way by sharing with you some of the images captured during my lifetime that have had a particular impact on me in one way or another. (I note in passing that the entirety of humankind’s rocket-based space exploration endeavours thus far have occurred during my life.) Given the thousands of beautiful, informative and sometimes shocking pictures taken from orbit – through the windows and lenses on the International Space Station for example – one might become a little blasé, or perhaps overwhelmed, by the choice on offer. Fear not, I am side-stepping them all; rather than study these ‘close-ups’ I want to share with you my enjoyment of the long-shots: the images captured from afar which reveal the whole Earth in its role as a rocky, water-rich ‘Goldilocks’ planet within our solar system. I might allow the Moon a look-in as well …

Although not the first picture of our home chronologically speaking, my all-time favourite image is arguably one of the poorest in terms of photographic quality: the so-called ‘Pale Blue Dot’ captured by Voyager 1. It was taken just before the probe’s cameras were turned off to conserve power as it headed towards the very farthest reaches of the Sun’s dominance and thence into interstellar space. Voyagers 1 and 2 were launched separately during the summer of 1977 (see here for more information; I was in my mid-twenties at the time!). Both were designed and built to provide the first close-up look at Jupiter and Saturn; Voyager 2 would in addition fly on to Uranus and Neptune, the outermost planets in the Solar System. The mission was to last five years … it’s still in progress four decades later, adding new science to an already astonishing portfolio. That fact alone sets the Voyagers apart in my imagination, but there’s so much more: to have designed a mission of such complexity on what was, relatively speaking, a modest budget and with the very basic electronic computers available at the time remains a triumph of scientific and technological endeavour. I still keep in touch with the mission’s progress via the Voyager Twitter feed.
On February 14th 1990, Voyager 1 was instructed to turn its cameras back and photograph the planets of our solar system whilst it was still just about possible. It was by then six billion kilometres from Earth. At this distance Earth appeared so small and so close to the position of the Sun that it was barely possible to capture anything at all. As it was, Earth occupies only about 1/8th of a pixel and sits within optical artefacts – the streaks of light – caused by the much brighter sunlight entering the camera via its colour filters. The Sun itself is just out of shot. Venus was similarly difficult to capture; neither Mercury nor Mars could be imaged at all because of their positions with respect to the Sun. (I've added arrows to help you see the tiny bright dot that is Earth.)
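As an aside for the numerically curious, the ‘fraction of a pixel’ claim is easy to make plausible with some small-angle arithmetic. The short Python sketch below is my own back-of-envelope check, not anything from the Voyager team’s processing:

```python
EARTH_DIAMETER_KM = 12_742     # Earth's mean diameter
DISTANCE_KM = 6.0e9            # Voyager 1's distance from Earth in February 1990
ARCSEC_PER_RADIAN = 206_265    # 3600 * 180 / pi, to the nearest arcsecond

# Small-angle approximation: angular size (in radians) ~ diameter / distance
angular_size_arcsec = (EARTH_DIAMETER_KM / DISTANCE_KM) * ARCSEC_PER_RADIAN

print(f"Earth's angular diameter from six billion km: {angular_size_arcsec:.2f} arcsec")
```

That comes out at under half an arcsecond – far smaller than a single pixel of a camera designed to photograph whole planets from relatively close range.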

‘The Pale Blue Dot’, in spite of its limitations, easily remains at the top of my list of the all-time-great pictures of our home planet. However, an image captured much earlier in the mission (far left) hovers somewhere nearby in terms of its ability to captivate my mind, my imagination. Only a couple of weeks after its launch, Voyager 1 took this picture of the crescent Earth and Moon from a distance of 11.7 million kilometres, and in the process illustrated the mission’s later potential. This was the very first time that the Earth and Moon had been captured in a single image (the Moon is at upper left; it's relatively dull compared to the bright Earth). Many analogous pictures have emerged over the years, each special in its own way. In the centre is a 2017 greyscale picture captured by the OSIRIS-REx probe on its way to rendezvous with an asteroid. (Because the Earth is so much brighter than the Moon, the image has been processed such that the Moon’s brightness is enhanced by a factor of three.) In 2010 the MESSENGER probe, sent to study the solar system’s innermost planet, Mercury, looked back out and away from the Sun to capture the view of the Earth⸱⸱⸱Moon duo shown on the right. When viewed from the vicinity of Mercury, both the Earth and the Moon will always appear as bright, essentially full discs – no elegant crescents from this perspective. There are so many other broadly similar Earth⸱⸱⸱Moon shots of this type available online; I’ve pruned my selection heavily for this blog and in the process cut superb images such as those shown here and here.

If the ‘Pale Blue Dot’ tops my personal list of iconic pictures of home, then a very close-run second place must go to one of the staggeringly beautiful images captured by the Cassini probe during its extended sojourn at Saturn. This one (top) was taken in 2013 with Saturn backlit – in other words we’re looking at its night-time face with sunlight scattering through the ring system and outer atmosphere. A lower right segment of the wide-angle full image, shown lower left, reveals the Earth peeping out from behind the rings (there’s an arrow to help you locate it; Earth is about 1.4 billion kilometres away and at that distance each pixel of Cassini’s camera covers almost 9000 km of the Earth’s surface). Enlarged further in the lower right panel, we can easily discern the Earth⸱⸱⸱Moon duo. For sheer majestic beauty it would be hard to beat such a picture of home. In passing, note the apparent absence of stars in these pictures. The reason for this is simply that they are too faint to show up in an image focused on such a bright target.

Moving a little closer to Earth again, I’ll share with you a picture of home that also captivated me: an image taken from the surface of Mars by the Curiosity rover in 2014 (here). It’s nothing special in terms of photographic quality, but it’s reminiscent of the sort of photo one could take of Venus, say, from here: from the surface of one planet, a bright point of light in the evening sky which turns out to be a neighbouring planet. Almost homely. The central panel shows a reprocessed version of the picture with an enlarged inset of the Earth⸱⸱⸱Moon system. At these sorts of distances we can be sure that the relative sizes of the Earth and its moon are about right, although even here one needs to consider the fact that the Moon and the Earth orbit each other such that their separation in space as viewed through a camera lens may appear to change. (Technically, they both rotate about their barycentre – their centre of mass; see my blog post or YouTube video on the Earth for an explanation.) For pictures captured closer to home, like those shown in the Voyager/OSIRIS-REx/MESSENGER images above, one needs to be even more careful since the apparent relative diameters may be heavily influenced by their respective distances from the camera. On the right-hand side above I have inserted an image I captured on my smartphone of the crescent Moon and crescent Venus: Venus is, in reality, far larger than the Moon – but it is of course much further away from my phone than is the Moon, even if it does appear in the same part of the night sky above my garden.

An interesting thought presents itself in the context of these images of Earth from the Martian surface: how would one observe Earth through a telescope in the way we might observe Mars? The same considerations apply to Earth observation from the lunar surface. One would be unable to look directly through a telescope eyepiece of course, since there would be a spacesuit visor in the way. For a more considered exploration of this ‘thought experiment’ I recommend an article in the ‘Sky at Night’ magazine, here.

I have followed the development of space exploration from about the age of six – I remember Sputnik and Telstar, the first animals to travel to space (and die there) and the start of crewed missions – and was in my mid-teens when the first astronauts left their footprints on the Moon. Although the video images beamed back to our monochrome and distinctly low-resolution TV screens were epoch-defining, they were nothing to write home about in terms of image quality. However, the images taken by crew members …
The Apollo 8 mission involved using the Earth’s and the Moon’s gravitational fields in order to do a loop around the Moon before coasting back to Earth; the crew practised most of the manoeuvres necessary for a Moon landing without that final all-important stage. In the process, they captured a series of pictures which have become truly memorable. Later missions, through to Apollo 17, added to this collection and/or improved picture quality. Two of the more iconic pictures of home are shown above: ‘Earthrise’ and ‘The Blue Marble’. The latter is fairly self-explanatory, although it’s worth pointing out that the inclusion of a view of Antarctica was a novelty at this stage of the game. ‘Earthrise’ does need some discussion though. The fact of the matter is that Earth never actually rises: the Moon is tidally locked to the Earth, meaning that the same lunar face is always pointing toward us – how could we talk of the far side of the Moon otherwise? So, if one face is permanently facing Earth then it follows that the Earth is always visible above the horizon from half the lunar surface and always invisible from the other half: there is neither rising nor setting. (Just to amuse yourself, take a look at this gif which shows the Moon ‘photo-bombing’ Earth – what we can see here is the far side of the lunar surface, the side we can never see directly from our planet.) However, as the Apollo crew were orbiting the Moon, it would appear from their perspective that the Earth rose and set each time they went around. The image on the far right is the actual orientation as seen from the orbiting command module; only by rotating the picture was it possible to present to us the final evocative picture of home as seen on the left. Unsurprisingly, ‘Earthrise’ became a poster-shot for the growing environmental movement of the day.

I might have stopped at this point had the chair of the Ashford Astronomy Association*, Jason, not reminded me of another series of spine-tinglingly good shots of our home world taken by Apollo mission crew. There exists some excellent video footage of the astronauts’ ascent from the lunar surface to re-join the orbiting command module (see here for example), but some of the stills are truly astonishing. The image above is one such: Moon in the foreground and Earth, home, shown in the distance.

I hope you have enjoyed my little gallery of pictures of home; feel free to share your own.

* I joined this lovely club a few months before ‘lockdown’; we’ve been meeting via Zoom ever since. I’ve had lots of good advice from its more experienced members which has probably saved me countless hours of trial and error when trying my hand at astrophotography, and some pretty decent suggestions for beautiful things to observe.

Wednesday, 19 May 2021

Conversations: from screen to woodland

Many years ago I discovered the joy of sharing my love of science with non-experts: initially within schools (all the way to Years 4 and 5, e.g. here) and then to lay adult groups. Since retiring I’ve focused primarily on the lovely retired or semi-retired people of the U3A, both in my home town (e.g. here) and more widely (e.g. here). I have learnt, and continue to learn, how to communicate science-based topics. Spending more than four decades speaking at meetings and conferences, and three of those teaching physics students, has brought me to the point of being able to perform satisfactorily – yes, teaching is in part a performance art – and to enjoy the process. I’d now miss being able to talk to people about science and being a scientist. All this has taken place against the background of being a social/thinking introvert (see here for an explanation of these terms).

However, woven through those Science Communication activities has been a less conventional thread: invitations to join projects associated with festivals (e.g. here and links therein) and others coming from the arts (e.g. here or here). Each of these latter activities has taken me away from anything I might describe as my comfort zone; they have been ‘scary’ at one level or another, but also immensely rewarding. There have been fewer such opportunities since I retired, which is both understandable and a little sad. In a year of SARS-CoV-2 lockdown I confess to expecting nothing of the sort to come my way. What a lovely surprise, then, to have had two invitations arrive. It is these two activities I describe below: a recorded conversation for a new podcast series using Zoom, and a conversation about, and within, the natural world as part of a broader philosophical project.

Towards the end of June 2020 I had an email from Dan Harding. I am a long-time admirer of his work as Director of Music Performance at my old university and I follow his Twitter stream and music blog assiduously. Exactly as I had done in the context of the U3A, he too had decided that lockdown required of him a new venture: this one would comprise a series of podcast conversations on the theme of creativity within the pandemic. The series title captured the essence of the experiment very well in my opinion: ‘Zoom for Thought’. His email invited me to participate in the first episode. I went through my usual list of reasons why I absolutely couldn’t do this – it’s a long list – but said “Yes” anyway. A conversation with Dan is always a pleasure and this screen-mediated one was no exception, despite its novelty. I came away pleased I had agreed to do it and uplifted by the fresh recognition of so many aspects of creativity that transcended the differences between our respective areas of expertise. There have been many guests since, all erudite and accomplished in areas where I am not, and the podcast is now in its second series: do listen if you have a mind to.

“The first episode features a conversation with Bob Newport, Emeritus Professor of Materials Physics at the University of Kent, in which we talk about finding creative ways of continuing to explore physics at home, engaging listeners during lockdown in both science and music, finding teaching tools within the home and grappling with the exciting unpredictability of a dodgy wifi signal... "I'm an experimental scientist, after all: I should be good at putting odd things together." Having retired from teaching at the University of Kent, Bob now teaches for the University of the Third Age in a series of blog articles and videos. Since lockdown, Dan has been engaging the community of musicians at the university through the Virtual Music Project, creating a series of recordings of music by Vivaldi and Mozart made in isolation…” Taken from here.

Skip forward to March 2021 and a message from Sarah Dance via Twitter. I first met Sarah many years ago in connection with one of the less conventional science communication avenues mentioned earlier; she works in the arts and creative industries (see here) and has been based in my part of the UK for a couple of decades. She was playing match-maker, and her follow-up email ‘e-introduced’ me to Russell Burdon in the context of his current philosophical artistic project (see here). To paraphrase the project’s website, his art residency seeks to deliver responses to the landscape and its biodiversity and history as he finds inspiration – and this inspiration may be fed from across intellectual disciplines and media types … enter yours truly, stage left, scientist.

Russell and I met a few weeks later on a sunny day and spent almost two hours walking and talking on and around the Crab and Winkle Way, mostly on pathways through mixed woodland. I say ‘walking’ as though there might have been planned and purposeful progress, but in truth there was a lot of strolling and quite a bit of standing still. The conversation let up only when an unusual butterfly or colourful patch of wild flowers held our gaze for a while. We had some things in common but by no means all, and together with a willingness to be open that admixture led to an unpredictable but wholly positive flow of words in pursuit of understanding. I’m not sure whether my contribution from the perspective of a scientist in retirement will prove useful within Russell’s project but, either way, I’m looking forward to seeing what eventually emerges.

In his novel ‘Till We Have Faces’, a re-telling of the classical myth of Cupid and Psyche, C.S. Lewis uses the phrase “words going out to do battle with words”; sometimes words play a far, far more constructive role.

As the 1990s TV advert told us: “It’s good to talk”.

Saturday, 3 April 2021


This is in a very real way a postscript. My previous post, Mizar, marked something of a turning point in my pursuit of astronomy and then astrophotography. I wrote of a childhood hobby, carried out on a pocket-money budget and before personal computers were dreamt of, now reborn in retirement with a modest, but nevertheless much-improved budget. The evening’s observations that had given rise to that post had seen me reach a place of comfort with telescope, astrocam, software … in fact, the whole kit and caboodle – a place of more confidence than happenstance or happy accident. There remains more to learn than is already learnt – that probably goes without saying given my continuing status as a novice – but I have at last passed my ‘driving test’. Thus, with a second clear and still night presenting itself in as many weeks – a rare coincidence this past winter – the temptation to set everything up again and try out my new skills was irresistible.
Gemini: a set of stars at all sorts of distances from the Earth – i.e. not bound to one another in any meaningful sense other than that they all reside in our galaxy, The Milky Way. Gemini is an asterism, not a constellation. Their brightness and apparent proximity suggested particular shapes to our forebears; this is a much-studied form of ‘pattern-recognition’ (see here for a brief article outlining current thinking on the process.) (Image created from ‘Stellarium’, a free-to-download computer package allowing one to generate bespoke star maps.)

Serendipity provided me with a ready-made target. The vice chair of the amateur astronomy society I joined shortly before the SARS-CoV-2 virus appeared in the UK (Steve, see here for details) had recently posted a detailed review of the night sky in spring. Within this was a section on the asterism Gemini. In particular, it mentioned the constituent star Castor, a binary system much like Mizar in some ways. It has two principal components and a more distant third – each of which is itself a binary, the partners all being too faint and too close to observe directly; so six stars in all, just like Mizar. Six stars, five orbits; all taking place about 51 light years away (see diagram below). This was interesting in a generic sort of way, but the key factor for me was that the two brightest stars orbit each other at a relatively small distance. Indeed, from Earth, their angular separation is only about 6 arcsec*. Thus, I had a pretty good test subject for the claim I made in my post on Mizar: that I could probably fully resolve objects only ~5 arcsec apart. The game was on.

The five orbits: each of the three close pairs orbiting each other (that’s three), Castor A and Castor B orbiting each other (four), and finally the Castor C pair orbiting the combined AB system (five). Again, highly analogous to the Mizar ‘sextuplet’: a beautifully complex dance to gravity’s tune.

Castor was going to be pretty high in the sky at the times I was planning to observe it, so I kept the tripod and mount relatively low to the ground, extending the legs only enough to level the mount; that way the small finder-scope was still easily usable without my needing to balance on steps! The image was taken using the red-light torch bought for me by my daughter; it’s a light that doesn’t wreck one’s night vision. By contrast, right at the top of the image is one of the nearby street lights that can be quite a nuisance, although less so than external security lights and passing car headlamps. Unfortunately, to get a view of both Polaris (for Polar Alignment of the mount) and of the southern sky for a lot of the interesting stuff, the front of my drive is pretty much my only option. On the positive side, it’s led to several lovely (socially distanced) conversations with passing ‘night owls’. The picture on the right is of my laptop display as I located Castor – the A⸱⸱⸱B stars were immediately visible; I’ve circled the pair. I confess to staring at the screen for a minute or two; in part this was the pleasure of seeing two well-resolved stars, but it was also in growing celebration of the fact that I had at last ‘cracked’ polar alignment – there was only minimal drifting of the stars on the screen.

This is the final stacked image showing Castor A and Castor B (lower right); an enlarged version is inset (upper left). The image represents the best 30% of 5000 40 ms sub-exposures. The two stars (or rather, the two pairs of stars – their respective partners being too faint and close to observe directly) are well-resolved: my system’s estimated resolution would seem to be entirely justified. The labelled figure shown below will clarify this further I hope. Castor A and B orbit each other with a period of about 445 years; their partners, Aa⸱⸱⸱Ab and Ba⸱⸱⸱Bb have orbit times in the region of 9¼ and 3 days respectively.
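A quick aside on the arithmetic of stacking: keeping the best 30% of 5000 sub-exposures of 40 ms each amounts to a surprisingly short total integration. A couple of lines of Python (my own sums, nothing to do with the stacking software itself) make the point:

```python
SUBS_CAPTURED = 5000
KEEP_FRACTION = 0.30          # only the sharpest 30% of frames are stacked
EXPOSURE_S = 0.040            # each sub-exposure is 40 ms

frames_stacked = round(SUBS_CAPTURED * KEEP_FRACTION)
total_integration_s = frames_stacked * EXPOSURE_S

print(f"{frames_stacked} frames stacked: {total_integration_s:.0f} s of light in total")
```

A mere minute of accumulated light, sliced into 1500 snapshots fast enough to freeze the atmospheric turbulence.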

Given my fixation on the relatively bright near-neighbour Castor A and Castor B pair, I confess that I didn’t even look for the more distant Castor C. Since the pair are classified as cool dwarf stars with luminosities less than 10% of the Sun’s, and they eclipse each other as they orbit around their common centre of gravity (their barycentre: see my earlier post here), it hardly seemed worth the search. I was wrong. What I should have done was to collect data at longer exposure times, ignoring the fact that the A and B stars would be over-exposed; nevertheless, by increasing the brightness of the stacked image in my old version of Photoshop it was still just about possible to make out the third component: Castor C. This is shown in the inverted (‘negative’) image below. Castor C orbits the AB system every 14,000 years approximately; the pair of stars making up this faint companion orbit each other in a little under 20 hours!

The labelled and annotated figure above summarises the results of my evening’s observational experiment; the images have been inverted ('negative') in order to show Castor C more clearly.

Facts and figures beyond those offered in the stargazing guide referred to at the opening of this post were taken from two online sources: here and here. Both sites contain a wealth of additional information should you wish to learn more. One additional fact with which to close the post: because the orientation of their orbit is not face-on to the Earth, the angular separation of Castor A and B varies with time: in 1970 it was only 2 arcsec and by 2100 it will reach a maximum of 6.5 arcseconds.

* From one horizon across the arc of the sky to the opposite horizon is 180º (i.e. half a circle), but a degree is too large a unit for many purposes, so we may divide it into sixty arc minutes (arcmin, sometimes simply ′). The Moon and the Sun both appear to be about 30 arcmin across for example, ½º. Even an arcmin is too large on occasion so we may divide that again into sixty arc seconds (arcsec or ′′). (This extract taken from my previous post, here.)


Friday, 26 March 2021


My father was part-way through his apprenticeship as a bricklayer when the light escaped from its star and began its journey across space. He had left school in his early teens, as was the norm for working-class kids back then. War in Europe was brewing, and although he didn’t yet know it he’d soon be lying about his age so that he could sign up for the parachute regiment; his apprenticeship would be put on hold for the duration. Skip forward eighty-three years to the present day; to be precise, to the tail-end of the day on which the Sun had set on 2020-21’s astronomical winter: the night before the Vernal/Spring Equinox. The sky was clear and there was only a slight breeze: almost perfect for a bit of star-gazing. Within the hour I could be found in my front garden looking back in time, in a manner of speaking, to those late 1930s. My target for the evening was Mizar in The Plough, a part of Ursa Major. Mizar is a binary star system which sits at about 83 light years (ly) from us – in other words, the light which fell into my telescope that evening left the stars’ surface 83 years ago. As it happens, the Mizar binary system is more intriguing than it appears at first sight: it has a few surprises up its metaphorical sleeve.

On the left is Ursa Major – the Great Bear. Technically, we ought to be calling it an asterism rather than a constellation since it has more to do with our innate human tendency to invent patterns where there are none than with an objectively fixed shape. Although some of the stars in Ursa Major are at roughly the same distance from us (in the region of 80-85 ly; a light year is the distance light travels in one year) and have similar trajectories in space, others do not. Thus, in times past and in times yet to come what we might perceive as a bear would have appeared/will appear quite different. Mizar forms a particularly noticeable star within part of the large asterism of Ursa Major – it’s the kink in the prominent arm of what is commonly called The Plough (or Big Dipper, or Saucepan, or …). The images used above are taken from ‘Stellarium’, a free-to-download computer package allowing one to generate bespoke star maps.

My primary goal since beginning to dabble in astrophotography after my retirement has been to gain reasonable images of each of the planets in our solar system; to this I might add a long list of lunar features and a sunspot or two. I’ve made a start (see former posts here and here) but there’s so much more I want to do. Moreover, along the way I’d also like to capture images of a few more distant objects: galaxies, star clusters, nebulae. All this requires that I gain an understanding of what my telescope + astro-camera can achieve and the ways in which I might realise that potential. For instance, the telescope’s focal length and magnification, and the size of the CCD chip in the camera, together limit the effective field of view (FoV) available to me and define the resolution achievable. Again, in the post immediately previous to this one I mentioned the need to align the telescope’s mount correctly using the Pole Star (Polaris) so that an object might be accurately tracked across the sky as the Earth rotates beneath. This is a task rendered difficult and uncomfortable, even painful, when one tries to manage it with creaking knees and an inflexible spine – finding a way to mitigate this remains on my ‘to do’ list. (I'm tempted to remark that the designers of the polarscopes on the market have really missed a trick here: building in a mirror or prism to allow the eyepiece to face outward or upward would solve this problem at a stroke.) I also expressed the hope that I could, with my smartphone attached to the telescope, use one of the many sky chart apps as a way of navigating to particular target objects. Thus it was on this particular evening that I decided to do some experiments in order to explore the issues. Given the aim, what follows necessarily contains a technical element; hopefully not too much.

Although I currently see no way around the physical discomfort of the polar alignment process, I did at least achieve some success this time around – and the data to verify that statement. The results weren’t perfect – objects initially centre-screen were still drifting slowly off – but I think I understand why. Next time. However, by focusing on a star and generating images as a function of the exposure time, one can see when the image ceases to be circular and develops an elongation due to the inaccurate tracking. That is to say, we can ask how long an exposure is possible before there is blurring due to the star’s apparent motion across my laptop screen. Now, for all astrophotography it is necessary to collect a number of individual sub-exposures and then stack the best of them to generate a final image, since this allows one to reduce the issues of atmospheric turbulence. For the purposes of this exercise I collected 500-1000 ‘subs’ and typically used the best 30% of them. The results are shown below.

These images show Mizar A and B (see below) with sub-exposures set to 51, 67, 130, 300 and 500 ms (milliseconds) respectively, left to right. The three shortest exposure times generated final images too faint to show up clearly in a blog post and I have therefore artificially altered the brightness. Only at 500 ms (i.e. ½ s) do we see evidence of distortion in both stars. Improved polar alignment will hopefully allow the extended exposure times of a few seconds required for fainter objects. However, even the results shown here are sufficient for the solar system targets I have in mind, and a great deal more besides. (I ought to note that the camera focus, a critical preliminary stage in the overall experiment, was achieved using a Bahtinov mask as discussed in earlier ‘stargazing’ posts.)
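To put those exposure times into context, at the celestial equator the sky appears to rotate at roughly 15 arcsec per second of time. A tracking mount cancels almost all of that, so the little Python sketch below (my own illustration, with the sidereal rate as its only input) gives a worst-case, untracked upper bound on the drift during each exposure:

```python
SIDEREAL_RATE_ARCSEC_PER_S = 15.04   # apparent sky motion at the celestial equator

for exposure_ms in (51, 67, 130, 300, 500):
    drift_arcsec = SIDEREAL_RATE_ARCSEC_PER_S * exposure_ms / 1000
    print(f"{exposure_ms:3d} ms -> up to {drift_arcsec:.1f} arcsec of drift if untracked")
```

With the mount tracking, the real drift is only some fraction of these numbers, which is consistent with the elongation becoming obvious only at the longest exposure.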

The images shown above give us a first insight into the complexity of the Mizar binary system. Now, the binary star that one is supposed to be able to see with the naked eye, but which in practice requires good eyesight and the absence of light-polluted skies – or the use of a pair of binoculars – comprises Mizar and the fainter star Alcor. They are at a distance of approximately one light year (1 ly) from each other and are therefore only weakly bound together gravitationally; they take several hundred thousand years to orbit each other. It turns out that Mizar and Alcor are themselves both binary stars, and whilst the dwarf star associated with Alcor is too small/dim to be seen directly, the Mizar binary – Mizar A and Mizar B – can be resolved even with my amateur setup. These two stars, one much brighter than the other, orbit each other every 5000 years or so at a very close distance – only about ten times the distance from the Sun to Pluto. It takes about 5½ hours for the Sun’s light to reach Pluto, meaning that the Mizar A⸱⸱⸱B distance is only about 55 lh (light hours; the Earth orbits the Sun at a distance of approximately 8½ light minutes). The stars are all relatively young at about 370 million years; by comparison, the Sun is approximately twelve times as old at 4½ billion years.
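Those light-travel comparisons are easy to reproduce for yourself. Here is a short Python sketch, using the standard figure of roughly 499 seconds for light to cross one astronomical unit and Pluto's average distance of about 39.5 AU (both rounded, so the answers are approximate):

```python
LIGHT_SECONDS_PER_AU = 499.0   # light takes ~499 s to cross one astronomical unit
PLUTO_AU = 39.5                # Pluto's average distance from the Sun, in AU

sun_to_pluto_lh = PLUTO_AU * LIGHT_SECONDS_PER_AU / 3600   # in light hours
mizar_ab_lh = 10 * sun_to_pluto_lh                         # ~10x the Sun-Pluto distance

print(f"Sun to Pluto: about {sun_to_pluto_lh:.1f} light hours")
print(f"Mizar A-B separation: about {mizar_ab_lh:.0f} light hours")
```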

The above is an inverted image of the view I captured. (I inverted it simply because it’s easier to see ‘black’ stars against a pale background; the superimposed circles should also help.) Mizar A and B are on the right and Alcor – the principal component of the binary system visible with binoculars – is on the left. The figures shown below will hopefully be easier to follow as I’ve added labels. The image is the result of stacking the best 40% of 1000 individual 75 ms frames, so 30 s in total. I probably ought to have collected more data, but the evening’s experiment was about testing my stargazing equipment rather than achieving polished pictures.

We’re not finished with the complexity of this system yet: both Mizar A and Mizar B have been discovered to be binary stars in their own right (Aa and Ab, Ba and Bb should you have a desire to label them). These pairs have orbital periods as short as three weeks or so. As with Alcor’s partner star, they are not visible using anything other than the most sophisticated equipment. However, it’s still an interesting star system to have examined: not a simple binary pair at all but rather a multi-star system – a sextuplet if you will. I was delighted to have seen three stars within the system although being able to add something of their ‘backstory’ enhances the fun. There are several good descriptions of the system available online, e.g. here, here and here.

We need to cover a few preliminaries before I can tell you what the results reveal. Specifically, we need to understand the measurement scale used to describe the apparent separation of two objects in the sky. From one horizon across the arc of the sky to the opposite horizon is 180º (i.e. half a circle), but a degree is too large a unit for many purposes, so we may divide it into sixty arc minutes (arcmin, sometimes simply ′). The Moon and the Sun, for example, both appear to be about 30 arcmin (½º) across. Even an arcmin is too large on occasion so we may divide that again into sixty arc seconds (arcsec or ′′). Having got that under our belts, the diagram below shows the results obtained.
The Mizar⸱⸱⸱Alcor separation as seen from the Earth is about 11.8 arcmin. This separation gives me the chance to measure experimentally my telescope+camera’s field-of-view. On this basis I estimate my equipment’s field-of-view – the amount of the sky displayed on my laptop screen – to be 16 x 9 arcmin. This is small; it’s no wonder I have to make mosaics in order to create an image of the Moon. The Mizar A and B pair offered one more useful measurement. When viewed from the Earth they appear separated by 14.4 arcsec; given how cleanly the pair is resolved, this suggests that I can hope to resolve objects separated by only 4-5 arcsec. I’m quite happy with that.
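The calibration itself amounts to a simple ratio: a pair with a known angular separation tells you how many arcseconds each pixel spans, and the sensor dimensions do the rest. Here’s a sketch; the 1416 px measured separation and the 1920 x 1080 sensor are illustrative assumptions, not my actual figures:

```python
# Field-of-view calibration from a double star of known angular separation.
# The pixel measurement and sensor size below are illustrative assumptions.

def plate_scale(known_sep_arcsec, measured_sep_pixels):
    """Arcseconds of sky spanned by each pixel."""
    return known_sep_arcsec / measured_sep_pixels

scale = plate_scale(11.8 * 60, 1416)   # Mizar⸱⸱⸱Alcor: 11.8 arcmin = 708 arcsec
fov_w_arcmin = scale * 1920 / 60       # full sensor width, in arcmin
fov_h_arcmin = scale * 1080 / 60       # full sensor height, in arcmin
print(scale, fov_w_arcmin, fov_h_arcmin)   # 0.5 arcsec/px -> 16 x 9 arcmin
```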

Dad would have enjoyed reading this post, not because he was into astronomy as such, but nevertheless ...  Only later in my life, when I had watched my children begin to shape their lives, did I more fully realise how important his support had been as I moved inexorably towards a life so different to his own. This included helping me to buy my very first telescope, which I still have. It therefore seems entirely natural to think of him as I reflect on my evening with Mizar.

Saturday, 6 March 2021

More pictures of a stargazer

Back in August 2020, whilst those in the UK who were not designated ‘clinically extremely vulnerable’ were enjoying a transitory relaxation of covid-19 lockdown rules, I posted a synopsis of my early attempts at astrophotography (here). This was my celebration of a return in retirement to the hobbies of my childhood and teenage years: astronomy and the geeky side of photography. I remain moderately pleased with the images I shared then and offer now, even though they bear no comparison to those readily available online from professionals and experienced amateurs alike; the essential point is that they are my images. The targets were tracked down by me (without the benefit of an automated navigation system, often referred to as a ‘GoTo’ system) using a telescope I had set up; the images were captured to my laptop’s hard-drive and the data processed using the desktop in my study. It’s personal. Having said that, most of the targets were fairly easy to find and I used only the necessary basic levels of data processing software. This seems like an opportune moment to record and reflect upon my progress … such as it is, given the dearth of suitable conditions for observing during this past year. The fact of the matter is that I not only need a clear sky but also the near-absence of wind in order to get anywhere at all. Even when both criteria are ostensibly satisfied there may be too much turbulence in the upper atmosphere to achieve anything much. Nevertheless, fun has been had and I continue to take baby steps forwards.

New images.
Top row left to right: Tycho, an impact crater near the Moon’s South pole (note the bright ‘rays’ coming from it: these contain reflective glassy materials); Mare Nectaris, the Sea of Nectar lava plain; Clavius, an impact crater.
Middle row: the Mizar A-B binary star system in The Plough; Uranus (image expanded).
Bottom row: M42, the star-forming nebula in Orion (my second attempt, the first being in my previous post); Betelgeuse, a red giant star in the same constellation (image expanded). The green glow in M42 comes from oxygen atoms in the cloud, excited as they are bombarded by light from those four bright central stars – it’s the same physical process that gives us the aurora on Earth.

The topmost video is of a single star imaged onto my laptop screen; the evening had been perfectly still when I was setting up, then a breeze started: only 7 mph, but with modest gusts – the results are self-evident. Even on a still night, upper-atmosphere turbulence can distort the image in the way shown in the lower image of the Moon.

One of the purchases I made has eased the fraught but absolutely essential process of achieving a precise telescope focus. As purchased, my telescope had a manual rack-and-pinion focusing wheel; classic. I described the frustrations of its use in my earlier post. Thus, when I saw in a sale a motorized drive I could retrofit to the focus mechanism I snapped it up. Now I can alter the focus without setting up vibrations in the telescope and needing to wait for everything to settle down between each adjustment as I home in on the perfect setting. Moreover, this also made it practicable to use a software tool in conjunction with my Bahtinov mask – also mentioned in the previous post – which provides a real-time quantitative estimate of focus quality.
A frightening level of force was required in order to twist off the original wheel (identical to the remaining wheel at the top of the image; it was glued onto the threaded shaft).

A key weakness intrinsic to my setup has been the alignment of the telescope’s equatorial mount: get it right and the target object stays centre-screen as the mount’s motors compensate for the rotation of the Earth; get it even slightly wrong and the target’s image will drift slowly away. The addition of a polarscope – a Christmas present – ought to have solved the problem. In essence this is a small telescope which fits directly to the mount and is designed to allow the system to be aligned using Polaris, the Pole Star. If the mount is set up correctly, then one can attach the telescope in the confident knowledge that a target object, once located, will be perfectly tracked thereafter. That’s the theory. In practice I had a false start because the polarscope wasn’t perfectly aligned with the axis of rotation of the mount: if the polarscope itself is misaligned then all the other steps in the process topple over. Using a point on a distant neighbour’s TV aerial as a daylight target I have overcome this particular stumbling block, I think. However, there next arises the need to position Polaris correctly in the polarscope’s field of view – it’s not at the centre because Polaris does not sit precisely at declination 90º (i.e. directly above the Earth’s axis of rotation) but on one of a set of concentric circles, at a position that varies with the time of day. I got this slightly wrong on the first outing so, although tracking was improved, I still couldn’t risk the longer exposure times necessary for fainter objects. I found an app for my ’phone which takes the work out of calculating Polaris’ position on the target circles so I am hopeful of being able to take another step forward next time the conditions are right.
The polarscope is shown above, fitted to be on the axis of rotation of my equatorial mount. (I've added a blue line to the image in order to highlight this.) Light from Polaris passes through a graticule within the small telescope and the mount’s alignment is fine-tuned to place Polaris on target – the appropriate position having been calculated by an app on my ’phone (see the screenshot on the right). Note that the angle of the polarscope/mount is 51.3º, this being my latitude. On an age-related note, I must add that the contortions I need to force upon my body in order to sight Polaris through an eyepiece only 1-1.5 m above the ground and inclined at 51º are entirely non-trivial.

Locating objects remains a challenge unless they are bright or easily spotted in relation to readily identifiable stars/constellations nearby. There are two methods I hope to try in order to ease this problem. One is to place a known star in the centre of my field of view and then set the telescope mount’s celestial coordinates scales (its Right Ascension, RA, and declination, dec – akin to longitude and latitude respectively) to that star’s documented position. Thereafter, I ought to be able reliably to move the telescope to any given new object’s coordinates once I’ve looked them up. The second method I’m keen to try involves attaching my smartphone to the telescope using a suitable holder and undertaking an analogous process using one of the myriad of astronomy apps available. Thus, by sighting a known target in the telescope and then tweaking the alignment of my ’phone so that the corresponding object is displayed centre-screen, I ought then to be able to navigate to any other object using the app. Hopefully, this will take me to some of the fainter objects I might like to see such as a selection of nebulae and galaxies. This is all a work in progress, so we shall see.
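The first method boils down to simple coordinate arithmetic: centre a known star, set the circles to its catalogue position, then turn through the difference between it and the target. A sketch, using approximate catalogue positions purely for illustration:

```python
# Offset from a reference star to a target, in (RA hours, dec degrees).
# The coordinates below are approximate catalogue values, for illustration only.

def offset(ref, target):
    """Offsets for the mount's setting circles: (delta RA hours, delta dec degrees)."""
    d_ra = target[0] - ref[0]
    d_ra = (d_ra + 12) % 24 - 12     # take the short way round the 24 h RA circle
    return d_ra, target[1] - ref[1]

mizar = (13.42, 54.9)                # reference star, centred first
m51 = (13.50, 47.2)                  # M51, the Whirlpool Galaxy, as the target
d_ra, d_dec = offset(mizar, m51)
print(f"move {d_ra:+.2f} h in RA, {d_dec:+.1f} deg in dec")
```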


Extras, for those readers who just don’t know when to stop 😉

Useful Apps
I mentioned the image capture and processing software I use in my previous post and won’t cover it again here. What might be useful is a list of the apps I use; these are all for the Android operating system, but I’m sure there’ll be identical or similar apps available for Apple/Windows ’phones:
  • For general navigation around the night sky and to find the location of particular objects I mostly use Sky Map as I appreciate its simplicity of use; I also have SkySafari and SkEye installed and use them from time to time.
  • The tool I have discovered as an aid to polar alignment is Polar Clock; PolarAligner can also be useful and I have it installed.
  • By far the best tool I have found for identifying features on the Moon is LunarMap HD.
  • Reliable forecasts for cloud cover are a real boon and I am grateful to my ex-colleague Dirk Froebrich, a professional astronomer, for pointing me towards Weather&Radar. This not only shows meteorological radar images in real time but also has the facility to run the clock forward: using the previous 90 minutes of radar data it displays the likely position of any clouds in the user’s vicinity for the following 90 minutes. It’s not perfect but, in my experience, it’s pretty good.
Pixel counting
Large objects appear smaller if they are further away. This is as true in astronomy as it is in everyday life. Out of curiosity, I’ve taken a look back at the images I have managed to capture since beginning to try my hand at astrophotography and attempted to gauge their apparent size by counting the number of pixels from one side to the other. I say ‘attempted’ simply because there is a little uncertainty introduced by the optics: a point source of light is likely to show up in more than one of the camera detector’s pixels. Add to that the effects of imperfect focusing, especially earlier on in my observations, and the ‘spread’ becomes a more significant issue. This is a pretty universal issue, but it does mean that I need to make a ‘best guess’ estimate at the true extent of the object; the smaller the object appears, the larger the effect of any uncertainties.

Both the Sun and the Moon are huge in this respect: I cannot fit more than a fraction of their visible surface in my telescope/camera’s field of view. I do of course have the potential to image in relatively fine detail smaller areas on the surface – like individual lunar craters (down to about 15-20 km across) or the convection cell boundaries on the Sun – and this is a continuing source of delight.

In the same camp come many of the more easily located deep space objects, such as M42 (the star-forming nebula in Orion), M31 (Andromeda) or M45 (the Pleiades); each of them has an apparent size that exceeds by far the effective field of view of my telescope-camera setup. Perhaps counter-intuitively, magnification is not always the most valuable thing associated with a telescope – these deep space objects are a case in point. The key benefit of a telescope for objects such as these is its large aperture: it’s good at ‘gathering light’. Combine the large aperture with a long exposure time (more usually, a computer-managed stack of hundreds⸱⸱⸱thousands of individual exposures) and it becomes possible to reveal details and colours simply not visible otherwise. Whilst my setup is pretty good for viewing planets or individual stars it has too narrow a field of view for the more extensive objects. A larger chip than the 2 Mp one in my astro-camera would help, but I’d ideally also have a second, smaller telescope. It’s not going to happen; thankfully, I have a very long list of fascinating and beautiful objects still to capture for which my setup will do nicely.

Individual stars are at the other end of the scale. Mizar and Sirius both look to be in the region of two pixels in diameter, with Mizar probably appearing a shade smaller. Betelgeuse on the other hand, a red giant, steps this up to roughly three pixels across. (Creating purposefully unfocused images is actually a good way to reveal the differences in star colours.)

If now we turn to the planets in our Solar System the whole distance-apparent size phenomenon really begins to show up. For instance, when Mars was near its closest approach to the Earth during 2020 it appeared to be approximately 39 pixels across – meaning that some surface detail could be seen. This dropped to about half once the separation between us had increased again. Jupiter is almost three and a half times further away from the Sun than Mars (779 M km and 228 M km respectively), but at about the same time as Mars appeared to have a diameter approaching 40 pixels Jupiter presented at a whopping 140 pixels across. Jupiter is really big. Even Saturn, further away still at 1,434 M km, appears as big as Mars: the planet itself measuring 37 pixels in diameter, with the rings taking this figure up to about 80 pixels. Similarly, Venus weighs in at 42 pixels across whereas the much larger but far, far more distant planet Uranus (2,871 M km from the Sun) showed its blue-green face across a circle only 11 pixels in diameter. If you would like a less casual approach to this question of apparent size I suggest the excellent article available here. Now, it is technically possible to increase the magnification of my telescope+camera from its current value of 48X by interposing a ‘Barlow lens’ between the telescope and the astro-camera. This is a diverging lens and it has the effect of increasing the effective focal length of the telescope, commonly by a factor of two. This would take my setup to 96X magnification. One day I shall try this – but it requires absolutely ideal observing conditions since that increased magnification will accentuate the effects of atmospheric turbulence and one might end up no better off overall. (The likelihood of such ideal conditions falling on a day when both the planet and I are available is not high in the UK.)
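The arithmetic behind those pixel counts is just the small-angle formula: apparent diameter ≈ physical diameter ÷ distance, converted to arcseconds. A sketch using round textbook values and an assumed (not measured) plate scale of 0.5 arcsec per pixel:

```python
# Apparent angular size via the small-angle approximation.
# Diameters/distances are round textbook values; 0.5 arcsec/pixel is an assumption.
ARCSEC_PER_RADIAN = 206_265

def apparent_arcsec(diameter_km, distance_km):
    """Apparent diameter in arcseconds (small-angle approximation)."""
    return diameter_km / distance_km * ARCSEC_PER_RADIAN

def apparent_pixels(diameter_km, distance_km, arcsec_per_pixel=0.5):
    return apparent_arcsec(diameter_km, distance_km) / arcsec_per_pixel

# Jupiter near opposition: ~143,000 km across at ~630 million km
print(apparent_arcsec(143_000, 6.3e8))   # ~47 arcsec
# Mars at its 2020 close approach: ~6,800 km across at ~62 million km
print(apparent_arcsec(6_800, 6.2e7))     # ~23 arcsec
```

Why the planets differ so much in pixel counts is then plain: Jupiter is so large that, despite its distance, it subtends roughly twice the angle Mars does even at Mars’ closest.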

Friday, 26 February 2021

Experimenting within the Third Age: ‘flipped lectures’


Four years BP (Before Pandemic) and soon enough after retirement that I could still remember the details of my life in salaried work, I wrote about some of the ‘experiments’ in education I was fortunate enough to be able to pursue (see here for example, final couple of paragraphs). I find that I am still succumbing to this weakness for trying new approaches in science communication. My intended audience is now the membership of the local branch of the University of the Third Age, U3A, rather than undergraduate students, but the passion to share my love of the life scientific remains the same. This post will offer, I hope, a preliminary reflection on a new take on an old theme. The rich vein of serendipity evident to me throughout will also emerge if I can string the right words together.

Cartoon by Jon Butterworth - used with permission.

Although my first 2020 ‘lockdown’ project was actually fairly conventional in many ways – a vaguely straightforward presentation of some basic Physics, albeit tied to objects one might very well find in the home – it did expand my experience of video creation/editing and the use of YouTube. I also learnt how to perch a small whiteboard on the lap and keep it level and in the frame without getting muscle pain and how simultaneously to control a handful of coloured marker pens as they attempted an escape. Such were the skills associated with 2020. A previous blog post provided an overview of the videos and formed, in essence, a Contents Page for the series (you will find it here). Curiously, given the crudity of the setup and my naïvety as a speaker-to-camera (I generally prefer to be behind the lens) this initial blog post has become the most viewed of them all. Having got to the end of my imagination, or perhaps simply my energy – it’s hard to say when in the midst of the stresses and strains of pandemic life – I turned to something more prosaic: making the few surviving recordings of my pre-retirement lectures available via YouTube (see here for details). The potential audience for this particular video series was never going to be large.

In early summer the call came for ideas and proposals for the approaching 2020/21 U3A Autumn & Spring Programme, none of which would take place face-to-face of course. Whilst everyone had ‘made do’, after a fashion, when the pandemic’s grip had first been felt and the second half of 2019/20’s programme had to be cancelled, actually starting a new year without even the remotest possibility of face-to-face meetings seemed worse somehow. My fellow science-based tutors and I ran monthly Q&A/Forum sessions via Zoom on a selection of topics, which went down very well. (I’m glad that I had suggested it earlier in the year, but the truth is that it was my friend and local U3A Science Coordinator, Alan Chadwick, who actually got it up and running successfully; I doubt I could have done so well.) However, this didn’t address the need/demand for the more focused sessions one would normally expect to lead. Some of these could be handled ‘live’ via Zoom, and several tutors took this route, but it didn’t suit the material I had or my presentation style. In particular, it would be difficult to include the demonstrations I try to weave into my sessions. So, without consciously realising it until well after I’d launched myself into my proposed ‘solution’, I turned to the use of an approach I’d tried during the final couple of years of my working life based on flipped lectures. A classic use of this would be to guide students into studying a topic in their own time – I had recordings of lectures I’d delivered in earlier years available for them, together with recommended reading etc. – and to follow this up with face-to-face sessions in which any issues arising from their study could be ironed out. In my ‘locked down’ variant I hoped to translate my pre-existing (face-to-face) sessions into videos that I could upload to YouTube for our U3A members to view as and when convenient. 
At some point thereafter we’d schedule a Zoom session so that everyone had the opportunity to engage in follow-up discussions and to pose whatever questions might have arisen in their minds.

As with all novel approaches, feedback and proper reflection are important when trying to assess whether the ‘experiment’ has been a success or needs a re-think. Questionnaires and their like do not appeal: too reminiscent of work, and a sure-fire way to dampen enthusiasm. A simpler route would be to assess the overall demand by looking at the numbers registering for the Zoom session and comparing them to those typically associated with a face-to-face session. Then, from the self-selected people who did participate, one might take a look at the questions posed – what folk actually took away as their appreciation of the videos’ content as distinct from my naïve intentions – and any unsolicited feedback. That brings me, at last, to the primary focus of this post.

Although there were decent access statistics for the blog post and evidence that the videos had been viewed, the numbers actually registering and turning up for the follow-on Zoom sessions were relatively small compared to analogous face-to-face sessions. In a ‘normal’ year one might see between 20 and 40 U3A members participating, but there were only a dozen or so at the Zoom sessions. About a quarter of those people sent subsequent feedback by email – all of which was positive I am glad to say. Likewise, the fact that several folk had taken the trouble to formulate and submit questions in advance and/or engaged in the resultant discussion might also be taken as positive. Having said that, a significant number of the questions posed were arguably at a tangent to the actual content of the associated videos; in practice, this doesn’t matter as the topics were fun to discuss: it’s all about science communication after all, and a little diversion can be instructive.

Taken together, it is reasonable to conclude that the format of ‘pre-recorded video plus Zoom follow-up’ was not popular amongst the membership as a whole. However, judging by the feedback received, the brave few who did embrace the experiment seem to have got something worthwhile from the experience.

On that basis, and bearing in mind the large investment of time required, would I seek to offer ‘flipped’ sessions again in the future? Frankly, given the pandemic-derived impetus for the experiment, I sincerely hope the question doesn’t arise! The truth of the matter is that I needed a project on this scale to help fill the year and would almost certainly have proceeded even if I’d known no-one would turn up; moreover, I enjoyed doing it. Thus, every one of the lovely U3A members who did engage with my experiment provided a distinct bonus: each and every one of them rendered my investment worthwhile.

There is a postscript: now that the major part of my pre-existing U3A material is available 24/7 on YouTube I can’t see myself ever presenting it ‘live’ hereafter. This might be considered a negative consequence were it not for the fact that my next U3A project is thereby called into being – to put together some brand new material from scratch; watch this space …


I append below the feedback I’ve received and the questions submitted in advance by email (in italics) and the few notes I pulled together in case my brain ‘froze’ during the live Zoom sessions. There were plenty of follow-up/'live' questions of course, but I’m not sufficiently skilled at multi-tasking that I could jot all those down whilst also answering them, so only a few are listed below. I promised participants that I’d include all this material in the present post but, unless you particularly want to read it, do feel free to stop at this point.


Thank you very much for this excellent course.

I wondered if we would be able to have access to these valuable lectures on the internet indefinitely please, or is there a limited timescale (lifespan?)? [There is no time limit currently envisaged or planned.]

Thank you so very much for fascinating videos and this morning’s session on radiation.

This went so much further than what we learnt as radiotherapy students with the much broader aspect of the subject so well presented. I am new to U3A and have found the course content really stimulating with the standard of lectures.

I will be looking up your other u tube videos.

Thank you for another really fascinating set of videos and the question and discussion session today. I missed out on science education when I was at school and have been trying to fill the gaps ever since.

Thanks for a fascinating Zoom.

Really enjoyed your presentation … I've always felt quite comfortable to have my understanding stretched. Your presentation continued the process especially explaining gravity as fundamental, pervasive, measurable but not yet fully explained. There were good further reading hints too ...

Thank you again for your excellent courses, they make a huge difference.

Your lectures open new doors of learning, & hopefully understanding, food for the soul & life enhancing.

I’m enjoying the videos – at least I appreciate now why you always talk about glasses plural rather than glass!

Thank you so much for this talk – fascinating... I hope I didn’t ask too many questions, but I was riveted.

I have just finished watching your last video on glass. Thank you for putting so much effort into the presentations. As a complete novice I found the material fascinating and although some of the chemistry was above my head, I now have an appreciation of what a versatile and valuable material glass is. It must have been difficult to retire from such interesting work when your passion for the subject is still there!

Questions posed

1) Radiation

What happens to nuclear waste, and will it be a problem long term, or will we have mastered it safely?

It’s a big question, and a serious one. One of the ways forward for the high level waste that needs long-term storage is to incorporate it into a glass. Glasses may be designed to be stable for thousands of years; if in turn these are enclosed in outer protective layers and then carefully stored deep underground away from water courses we are likely to be well protected. There are some exceptionally talented people involved in the research behind these vitrification processes (e.g. at Sheffield). Remember, there is always risk – it’s a matter of selecting the optimum way forward on the basis of all available reliable evidence.

Polonium 210 - one of my friends stayed at the London hotel where it was believed Litvinenko was poisoned. About a week later, he was contacted by police at his home in Naples, he was asked to submit to a medical, luckily all clear. Given polonium 210 is an alpha emitter, and apparently was transported in a flask, how could radiation leak out to contaminate hotel rooms, aeroplanes?

Po-210 appears to have been introduced via a cup of tea drunk by Alexander Litvinenko during a meeting with former associates. Po-210 was finally confirmed as he was nearing death precisely because it is an alpha emitter and therefore almost ‘invisible’ to a standard Geiger counter: it took the involvement of specialist scientists with more sophisticated equipment. Once they were involved the trail of contamination could be followed back in time. The two poisoners had contaminated themselves and shared that with items and people around them; obviously, the table setting in the place he’d drunk the tea was also contaminated – especially the dregs in the tea cup. Po-210 is only dangerous as a source of radiation once inside the body, where the alpha radiation kills living cells as the metal is carried around the body (including to the bladder – causing more contamination).

If you keep a glass vase coloured with uranium salts long enough (700 million years!) will it lose some of the green colour.

On the face of it the answer is “yes”, but bear in mind that the decay chain for U-238 takes us, via intermediaries, to U-234 – an isotope which is of course chemically identical to its parent. All radioactive isotopes of U have an associated decay chain but U-238 forms 99.3% of the element, which is why I focused on that in the slides. Thus, given that the half-life of U-238 is about the age of the Earth, the colour will fade – but maybe not at a rate one would notice ;-)
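For anyone who’d like to put a number on that fade rate, the standard half-life relation gives the fraction of the original U-238 remaining after a time t; a minimal sketch:

```python
# Fraction of a radioactive isotope remaining after time t:
# N/N0 = 0.5 ** (t / half_life). U-238's half-life is ~4.5 billion years.
def fraction_remaining(t_years, half_life_years=4.5e9):
    return 0.5 ** (t_years / half_life_years)

print(fraction_remaining(7e8))   # after 700 million years, ~90% is still there
```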

How is a GM tube made sensitive to alpha radiation?

The key is to ensure that the end window is thin and made from something of low density (i.e. relatively few atoms to get in the way). Whilst gamma rays and the higher energy beta particles will have no problems entering the gas within the thin-walled tube, alpha particles are easily stopped. End windows of mica or occasionally even beryllium (element 4 in the periodic table, a metal – extremely toxic) are commonly used. Thus, whilst it’s harder to detect alpha radiation with a G-M tube it’s far from impossible with the right set-up.

In your demonstration with the various radioactive minerals and aluminium and lead absorbers you used lead wrapped in plastic.  Is there a significant hazard in handling lead?

Lead is toxic; it affects the nervous system and has a tendency to stick around in the body for a long time; it’s an example of heavy-metal poisoning. Lead water pipes began to be phased out decades ago, and lead additives in fuel were removed during the ‘90s and outlawed in the EU in 2000. Lead may be absorbed through the skin – hence the plastic bag.

At a few places in the videos there is a bit of blurring between X-rays which are electromagnetic radiation but are not produced by radioactive decay and gamma radiation also em radiation but is. In everyday language the word radiation is applied to many situations in which the radiation referred to is not nuclear.  I think I understand this but wonder if all your audience do.

This is a good point. ‘Radiation’ may be used of more than one phenomenon; however, at the core of our present topic is the word as applied to radiation having its origin within the nucleus of unstable atoms. (The cross-over in the video was, I think, associated with making the point that gamma rays are themselves electromagnetic in nature – so, a higher energy analogue to x-rays, which in their turn are higher energy electromagnetic radiation than the colours we perceive in a rainbow and so on.)

Comment: The nucleus changes during radioactive decay: it’s not that a part of the nucleus is ‘thrown out’ but rather that there’s a change to the nucleus which results in a newly-created entity leaving with the excess energy of the change. (e.g. a nucleus splitting into two, with one being an alpha; a neutron decaying into a proton and an electron, with the electron leaving; a whole nucleus shuffling down to a less excited state by emitting a gamma ray photon)   

2) What’s so special about the Earth?

How come we’ve landed up in a highly desirable area (estate agents) as a third area out from the Sun?

We are able to ask questions like this because we live on this particular planet – we wouldn’t be around to pose such questions from Venus or Neptune. There is a major philosophical theme here.

What pulled the plug out to get rotation going. Why does everything have to rotate and not stay still.

Early universe: once cool enough for atoms to form (H, He) they were at high T and therefore moving fast; more cooling meant that they could clump together (gravity), but all it takes is a tiny instability in one place to begin to affect all other places around. Locally, there are also the effects of collisions.

Gravity is a complete mystery to me. Everyone takes it for granted. I can’t. Want to know more. What is this prime force and how did it start?

The classic description by Newton tells us that it is a fundamental property of anything that has mass: it’s intrinsic to the universe. It’s not a strong force and may be dominated by other effects at short distances (e.g. magnetic, electrostatic) but for large masses and over long distances it becomes the boss. Einstein’s theories of Special and of General Relativity offered us a model of space-time which is curved/distorted and in which we ‘fall downhill’. We feel our weight because the Earth’s surface is preventing our fall down the slope towards its centre. A key observation was that the path of photons from distant stars is bent as it comes close to the Sun. Listen to ‘The Curious Cases of Rutherford and Fry’. (It was at this point in our Zoom session that we engaged in an extended discussion on the myth of the ‘lone genius’.)

Who names and accepts planet names?

Many different cultures named them independently. Earth comes from the 8th-century Anglo-Saxon word ‘Erda’, meaning ground/soil; Sun from the Middle English sunne (Chaucer’s Canterbury Tales). Planet names come from Roman myth. Stars are often named from Arabic; Polaris has also been known as Alruccabah, Angel Stern, Cynosura, the Lodestar, Mismar, Navigatoria, Phoenice, the Pole Star, the Star of Arcady, Tramontana and Yilduz at various times and places by different cultures in human history. Some things are named after their discoverer: the Kuiper belt, the Oort cloud. All objects, bodies and surface features are now overseen by the International Astronomical Union.

Termination shock? Heliosphere: You mentioned it is like a 'shock wave'. Could you explain a bit please? Is it just the name of a region that has certain properties or is there something physical there? How much of a barrier is it? Does it behave as a partial two-way barrier? As I understand it, we do get some cosmic radiation incident on Earth, as well as particles from the solar wind.

The term came from a diagram I had inserted into a slide. Heliopause – when the influence of the Sun (magnetic field, solar wind) no longer dominates over galactic forces. This might be thought of as the extent of the Solar System. (The Oort cloud is further out – held there in orbit by the Sun’s longer-ranged gravitational attraction.)

Can you say more about the creation of the molten core and solid centre of the Earth?

In the early stages of the solar system, the planets formed from the gravity-driven aggregation of gas and small particles. As ever-larger clumps collided and coalesced, the energy of the collisions heated everything up – everything from Mercury to Mars began life as a ball of molten material which then slowly cooled and began to solidify. (The same is true of the outer planets, but they became large enough that their gravity could hold on to a lot of gas as well.) In fact, proto-Earth was re-melted in a hugely violent collision with another proto-planet – estimated to be about the size that Mars is today – and out of this collision came the debris which formed our relatively huge Moon.

Is there anything similar on Venus, Mars or the Moon?

Mars is relatively small compared to the Earth, so it has cooled faster, but it does still have a molten core – however, perhaps due to its size, there isn’t the motion required to generate a magnetic field. Venus is more similar to Earth in size, but it rotates very slowly (one solar day on Venus is almost 4 Earth months; its sidereal day, at 243 Earth days, is in fact slightly longer than its 225-day year) and so we don’t get the motions required for either tectonic plate movement or a magnetic field.

I was particularly interested in the equation relating to the likelihood of developed life on a planet in other galaxies. Mostly obvious and difficult to get your head around the infinity of infinities of possible galaxies but what I had never thought of before, and should have, is the fact you have to take time into account too.  That is that life could have existed and finished or maybe not started yet.  So not only the number of galaxies and planets to consider but also the coincidence of life occurring on one at the same time as a life form such as humans on earth is capable of making contact.  Not to mention that contact must be recognisable by the other.

Yes indeed. It sounds as though you know of the Drake equation (noting the ‘guesstimates’ it requires) and the Fermi Paradox; I am content to await evidence.
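For anyone who’d like to play with the Drake equation itself, here is a minimal sketch. Every input value below is an assumption of mine, chosen only for illustration – the whole point of the equation is how uncertain these factors are.

```python
# A minimal sketch of the Drake equation, N = R* fp ne fl fi fc L.
# Every value below is a 'guesstimate' chosen purely for illustration.
R_star = 1.5      # rate of star formation in the galaxy (stars/year)
f_p = 0.9         # fraction of stars with planets
n_e = 1.0         # habitable planets per star that has planets
f_l = 0.1         # fraction of those on which life arises
f_i = 0.01        # fraction of those developing intelligence
f_c = 0.1         # fraction that produce detectable signals
L = 10_000        # years a civilisation remains detectable

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Estimated communicating civilisations in the galaxy now: N = {N:.2f}")
```

Vary any factor by an order of magnitude – which the evidence easily allows – and N swings from ‘we are alone’ to ‘the galaxy is crowded’; the time-overlap point raised in the question is captured by the final factor, L.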

Age of the Universe: You mentioned about 14 billion years. Has this been calculated by working backward to the Big Bang and based on current observations of the expansion rate? Or, is the calculation much more sophisticated than that? Detail of the maths not required!

We can get a ‘guesstimate’ of the age simply by reversing the clock on what we currently observe, but there is a need to be a little more subtle than that. We now know, on the basis of our expanding body of observations (which have, in effect, taken us back to within a few million years of the Big Bang), that the rate of expansion was different in earlier epochs. Consider, for example, the fact that when everything was closer together the gravitational attraction between masses was greater – so more tendency to slow the rate of expansion.   (We engaged at this point in a discussion of what the ‘observable universe’ means, introducing Dark Matter/Energy etc.)
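The naive ‘reverse the clock’ estimate mentioned above is just the reciprocal of the Hubble constant. A sketch, assuming a value of H₀ around 70 km/s/Mpc:

```python
# Sketch of the naive 'reverse the clock' age estimate: the Hubble time
# 1/H0.  It lands close to the accepted ~14 billion years only because
# early deceleration and later acceleration roughly cancel out.
H0 = 70.0                      # Hubble constant, km/s per Mpc (assumed)
km_per_Mpc = 3.086e19          # kilometres in one megaparsec
seconds_per_year = 3.156e7     # seconds in one year

H0_per_second = H0 / km_per_Mpc              # H0 converted to 1/s
hubble_time_years = 1 / H0_per_second / seconds_per_year
print(f"Hubble time: {hubble_time_years / 1e9:.1f} billion years")
```

The subtler calculation folds in how the expansion rate changed across epochs, but it is reassuring that the one-line estimate gets so close.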

Earth's Magnetic field: I can see why relative motion between a solid ferrous inner core and molten (ferrous?) outer core will produce a magnetic field, but how do they think these relative motions might have come about?

Actually, this is a complex problem: a fuller picture is that the solid inner core (about the mass of the Moon, but almost all Fe) rotates eastward a little faster than the Earth overall (it laps the rest of the planet once every 400 years or so) – it gets this ‘push’ from the action on the Fe of the Earth’s geomagnetic field. This means that the fluid outer core is travelling, in effect, in a westward direction compared to the inner core. The magnetic field is created by complex convection currents in the fluid outer core, and its relative motion compared to the inner core is what leads to the tendency for the poles to drift over time. The Fe sank to the centre early on in our history; the inner core is gradually growing as the Earth slowly cools.

Does the vortex in the bath go the same way in the southern hemisphere as here? Do Runner Beans twine the same way in Australia as they do in UK ?  (ref Flanders and Swann, The Honeysuckle and the Bindweed)

Yes, no difference – urban myth. They follow the Sun, wherever they are.

And a bit about Sun Dogs which we see occasionally during our sundowners on Whitstable beach, and the Northern Lights which we were amazed to watch from a beach on Sheppey one night a few years ago.

Refraction through hexagonal ice crystals in the upper atmosphere; 22º to either side of the Sun, sometimes with an arc (type of halo).
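The 22º figure falls straight out of the optics: the hexagonal plate crystals act as 60º prisms, and the minimum deviation D for a prism of apex angle A and refractive index n satisfies n = sin((A + D)/2) / sin(A/2). A quick check, taking n ≈ 1.31 for ice:

```python
# Sketch: why sun dogs sit ~22 degrees from the Sun.  Hexagonal ice
# crystals act as 60-degree prisms; the minimum deviation D satisfies
# n = sin((A + D)/2) / sin(A/2), with n ~ 1.31 for ice.
import math

A = math.radians(60)   # apex angle of the hexagonal ice-crystal 'prism'
n = 1.31               # refractive index of ice at visible wavelengths

D = 2 * math.asin(n * math.sin(A / 2)) - A
D_degrees = math.degrees(D)
print(f"Minimum deviation: {D_degrees:.1f} degrees")
```

Light can be deviated by more than this but not less, which is why the halo has a sharp inner edge at about 22º and fades outwards.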

3) Glass: a look inside – science, technology and art

Please discuss de-colouring agents a little more, specifically Manganese dioxide added to make greenish glass clear.  I did research several years ago on 19th Century lavender window glass.  Manganese was added to glass to remove the green tinge to glass.  But with oxidation/ exposure to sunlight, the glass windows turned lavender.  You can still see some of these lavender windows in vintage buildings…. Back Bay homes in Boston, Walmer Castle in Kent, Sanssouci Palace in Potsdam, Germany.   Occasionally I’ve seen a window or two of the same purple glass in old buildings which haven’t been “restored”.  I would love to hear your opinion about this effect.  

We can purposefully add a metal in order to induce a colour, but the raw ingredients may contain metal impurities; a common one is Fe. Contaminants need only be present in tiny amounts to introduce a colour. The green tinge is often associated with Fe and it’s possible to mask this by using a metal offering complementary colours (e.g. Se – pink; Co – blue). Thus, a better term would be neutralising or counter-dyeing. These will reduce the overall light transmission and can lead to a greying of the glass.

High purity raw materials will help, and it’s often important to control the oxidation state of the metal (controlling the proportions of O present in the furnace) and the heat-treatment of the melt.

Mn was popular because it has a range of oxidation states and the highest, Mn(VII), absorbs green light but transmits to either side … generating shades of purple; Mn also helps ensure the iron present is in the Fe(III) state (Fe3+), which imparts only a pale yellow to the glass and is therefore easier to neutralise. The action of UV light imparts sufficient energy to alter the oxidation state of the metals present – including Mn – and so initial colours may alter over time. Indeed, the use of artificial UV lighting in so-called ‘purpling boxes’ will accelerate such effects.

Mn ceased to be used in this way from WW1 as the major source at that time was Germany.

I do stained glass and glass mosaics as a hobby.  Some glass breaks evenly, and some breaks irregularly. Please discuss this effect a little further.         

From Léonie Seliger, head of the Cathedral’s Glass Studio:  “Some glasses are simply harder than others due to their composition. American opalescent glasses for instance are really difficult to cut; the feel under the glass cutter is almost as if you are trying to get through very hard plastic. You don’t hear the nice musical sound a glass cutter’s wheel makes on normal glass (the sound the French call ‘le chant du diamant’), and the cut is almost invisible on American opalescent glass.  It’s hard to break, and I find the break runs away from the cut more often than on normal glass. A major suspect for glass breaking irregularly is poor annealing. Glass that retains internal stresses (because they were not given enough time to dissipate as the glass cools down) will jump unexpectedly, sometimes even while you are still running the glass cutter over it. That can give you quite a start, wastes glass, and makes me very uneasy about using glass from a poorly annealed sheet. It’s alright if you cut it into small pieces, but I would steer away from using large pieces from a stressed sheet in a window. They could fracture when experiencing a measure of strain that well-annealed glass would withstand without any problems.”

My perspective:   I think there are two principal length scales: atomic and mesoscopic.

On the atomic scale it's the chemical bonding that dominates of course; composition is the key factor here. Some of the hardest, most brittle glasses we ever made were phosphates containing multivalent rare earth metals; that was all down to their atomic-scale structure and the bonding that went with it. Move to silicates in which the 3D [SiO4] tetrahedral network is heavily disrupted by alkali metals and these physical properties alter considerably. At longer length scales, the effects of inhomogeneities come into play: variations in local density and/or composition, which are highly likely to have their origins in processing (batch mixing, heating profile and annealing). I suppose one might tack on a third scale associated with microscopic/macroscopic factors like inclusions in the glass, including bubbles, and variations in thickness - but that's beyond my understanding.

Does this have possibilities ?

“in which they took balsa wood and removed its lignin – a component of wood that gives it [compressive] strength and colour [cell walls and related tissue; cellulose provides tensile strength]. Acrylic, which is non-biodegradable and water-repellent, was introduced into the remaining tissues where it filled both the tiny pores left by the removal of lignin and the hollow vessels that carried water in the tree. That, said Montanari, not only helped maintain the wood’s structure but also restored its strength and improved its optical properties. The upshot was a frosted-looking wood-based material. In the latest work the acrylic was mixed with another substance called polyethylene glycol, which permeates wood well.”

How do they make these?

Millefiori Glass Paperweights; it’s a bit like the glass version of ‘Brighton rock’, but perhaps easiest to watch:


I was quite surprised to find how recent plate glass windows are.

Yes, the really smooth, highly transparent and toughened variant arrived during our lifetime. There were large sheets before that, but of relatively poor/non-uniform quality. For details see ‘Float: Pilkington’s Glass Revolution’ by David J. Bricknell (ISBN 978-1-905472-11-6, 2009).

Fascinated by the idea of neutron diffraction.  I'm OK with wave particle duality, de Broglie wavelength, electron diffraction etc so am fine with the principles, but how do you separate the neutrons, how can they be accelerated to required energies, and how are they detected once diffracted?   

You’ve done the hard part; coming to terms with the fact that sub-atomic particles can behave like waves and thus be used to perform diffraction experiments is the key step; thereafter it’s fairly conventional.

Neutrons streaming from a nuclear reactor, for instance, emerge with a range of speeds – i.e. energies, or wavelengths – so we can use a suitable crystal as a monochromator via Bragg’s Equation: hey presto, we have a monochromatic beam of neutrons heading towards our sample. Of course, we first have to define a ‘beam’, but this can be done using standard shielding material to make a collimator. Post-sample detection as a function of angle is also relatively straightforward since neutrons, though uncharged, will interact with matter; a common form of detector uses scintillation: the neutron is absorbed by an atom’s nucleus – for example, Li, doped into a glass, is highly efficient at absorbing neutrons, and when it does so charged particles (an alpha particle and a triton) are released which cause the glass to scintillate, giving a flash of light that may then be detected relatively straightforwardly.
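The monochromator step is just Bragg’s law, λ = 2d sin θ. A sketch, with an assumed (but typical) graphite crystal and an illustrative Bragg angle:

```python
# Sketch of monochromator selection via Bragg's law: lambda = 2 d sin(theta).
# The d-spacing is that of pyrolytic graphite (002); the angle is an
# illustrative choice picking out thermal-neutron wavelengths.
import math

d = 3.355                     # graphite (002) d-spacing, angstroms
theta = math.radians(20.6)    # chosen Bragg angle

wavelength = 2 * d * math.sin(theta)   # angstroms
print(f"Selected neutron wavelength: {wavelength:.2f} angstroms")
```

Rotate the crystal to a different θ and a different wavelength is reflected towards the sample – that is all a crystal monochromator does.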

One can separate the neutrons generated by an accelerator source into their energies, or wavelengths, simply by timing their flight between two fixed points (using small detectors that sample the passing beam). The detectors will not now sort what emerges from the sample as a function of angle but rather on the basis of their time of flight (this is the approach used where I worked for a few years in the early ’80s – I posted about it on my blog a few years ago).
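The time-of-flight idea reduces to the de Broglie relation: measure the time t to cover a known path L, and the wavelength is h·t/(mₙ·L). The flight path and timing below are illustrative assumptions, not the numbers from any particular instrument:

```python
# Sketch of time-of-flight wavelength measurement: a neutron covering a
# known flight path L in time t has velocity L/t, and its de Broglie
# wavelength is h / (m_n * v).  Path and time below are assumed values.
h = 6.626e-34       # Planck constant, J s
m_n = 1.675e-27     # neutron mass, kg

L_path = 10.0       # flight path, metres (illustrative)
t = 4.5e-3          # measured flight time, seconds (illustrative)

velocity = L_path / t                       # m/s
wavelength_angstrom = h / (m_n * velocity) * 1e10
print(f"Neutron wavelength: {wavelength_angstrom:.2f} angstroms")
```

A thermal neutron at roughly 2200 m/s thus carries a wavelength near 1.8 Å – conveniently comparable to interatomic spacings, which is exactly why neutron diffraction works.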

How long does it take to grow a ‘Venus Flower Basket’ skeleton?

Because Euplectella aspergillum is found at such great depths, information about its active life is limited. This is an animal that protrudes from the rocky ocean bottom, making it a benthic (bottom-dwelling) animal. Details of the reproduction of E. aspergillum are not known; we can only assume it is similar to the normal forms of reproduction in related sponges.

How much stronger is glass in compression than in tension?

The compressive strength of glass is extremely high: 1000 N/mm² = 1000 MPa (for comparison, atmospheric pressure is 101 kPa – i.e. about 10,000 times smaller). This means that to shatter a 1 cm cube of glass requires a load of some 10 tonnes. Its resistance to tensile stress is significantly lower: 40 MPa (N/mm²) for annealed glass and 120 to 200 MPa for toughened glass (depending on thickness, edgework, holes, notches etc.). (By the way, glass is a perfectly elastic material: it does not exhibit permanent deformation up to the point of breakage. However, it is brittle, and will break without warning if subjected to excessive stress.)