

    Nuclear powered Planes, Trains and Automobiles

    August 29, 2019


To quote L.P. Hartley’s 1953 book “The Go-Between”, “The past is a foreign country: they do things differently there”. That’s definitely something that could be applied to our attitude to the newly discovered atomic power of the late 1940s and 50s. Within just a few years of the first atomic bombs being dropped on Japan, it seemed as though the atom would be the cure-all for all our energy needs, with power “too cheap to meter” as was once quoted. Whilst the ships and submarines of the leading navies went nuclear, companies put forward ideas for atomic powered planes, trains and, yes, indeed automobiles.

    The first idea of using a radioactive power source for a car, in this case radium, dates back to 1903, and in 1937 further analysis of such a concept found that it would need 50 tons of shielding to protect the driver. But with the development of small-scale, self-contained reactors for ships and submarines in the 1950s, the idea of atomic cars was back on the table.

    In 1958 Ford unveiled a uranium powered concept car with a typically 1950s futuristic name: the “Ford Nucleon”. In essence it was a scaled-down submarine reactor in the back of the car, which would heat stored water into high-pressure steam that would then drive two turbines, one to power the wheels and the other to drive an electrical generator. Ford engineers anticipated that it would have a range of around 5,000 miles before you would need to nip into your local Ford dealer and have the uranium core swapped out for a new one. The passenger compartment was situated over the front wheels, allowing the bulk of the reactor and the heavy shielding to be more centrally placed and keeping you as far away from the reactor as possible. Such was the optimism of the 1950s and the naivety of the general public that it was believed nuclear power would eventually replace petrol power, something which doesn’t really bear thinking about if you imagine a car crash turning into a major nuclear incident. Ford only ever made scale models of the Nucleon, as they had anticipated the miniaturization of reactors and lighter shielding materials. As these didn’t appear, and with increasing public awareness around radiation and nuclear waste, the project was dropped and the models ended up in the Henry Ford Museum in Dearborn, Michigan.

    Now if you thought the Ford Nucleon was a bit far-fetched, just look at the French Simca Fulgur, a 1958 concept car designed by Robert Opron. This was meant to show how cars might look in the year 2000: powered by a nuclear reactor, with voice control, and guided by radar and an autopilot that communicated with a control tower. At speeds of over 150 kilometers per hour, two of the wheels would retract and it would balance on the remaining two with the aid of gyroscopes. Also in France, in 1957-58 the Arbel Symetric was proposed with either a gas generator or a 40 kilowatt nuclear reactor called the “Genestatom”. This would have used radioactive cartridges made from nuclear waste; however, the French government disapproved of the use of nuclear fuel in cars and development was stopped.

    Of all the land-based forms of transport, trains were the most likely candidates to be nuclear powered, especially those travelling across large areas where electrification had not been done. In the U.S. a nuclear-powered locomotive called the X-12 was put forward in a design study for the Association of American Railroads and several other companies by Dr. 
Lyle Borst, one of the early members of the Manhattan Project, which had created the first atomic bomb. The X-12 would use liquid uranium-235 oxide dissolved in sulfuric acid in a three-foot by one-foot container surrounded by 200 tons of shielding. The reactor would create steam to power turbines driving four electrical generators, which would produce the 7,000 horsepower of electricity needed to power the motors. This was about the same as a four-loco unit, with each loco having 1,750 horsepower, but it would only need refueling once a year, although it would cost about twice the price of a four-loco unit. The whole locomotive would be 160 feet long, weigh 300 tons and have an articulated rear section where all the cooling radiators and condensers would be placed. But the cost of developing such a locomotive without government subsidies, the cost of the highly enriched uranium-235, and the huge cost of liability insurance in case of an accident made the X-12 uneconomical, and it was not pursued by any of the train companies.
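    As a quick back-of-the-envelope check on those figures, here is a minimal sketch in Python; the horsepower-to-kilowatt factor is the only number not taken from the text above.

```python
loco_hp = 1_750              # horsepower of one conventional loco
four_loco_unit = 4 * loco_hp # the unit the X-12 was compared against
x12_hp = 7_000               # the X-12's quoted electrical output

print(four_loco_unit == x12_hp)           # True: same pulling power
print(f"{x12_hp * 745.7 / 1e3:,.0f} kW")  # ~5,220 kW in modern units
```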
However, in 1950s Soviet Russia money was not the same issue as it was in the U.S. In places like the far north, the Far East and the Central Asian deserts, it was thought that electrification of newly built railway lines was not advisable at the time. So in 1956 the Ministry of Transport of the USSR came up with a plan to make super-sized nuclear trains which would run on tracks three times the width of normal ones. The train could be used in areas where there was little in the way of supplies or infrastructure to support normal railways, and whilst it was stopped it could also serve as a small power station, generating electricity and hot water heating for weeks or months if required in remote locations. The train would use the super-sized tracks to accommodate the extra weight of all the radiation shielding, but whilst that might be enough to protect the drivers and passengers in front of and behind the loco, the sides and the underneath might still irradiate the environment. The other problem was that infrastructure like embankments, bridges and tunnels would all have to be enlarged for the extra-wide track over thousands of miles in some of the world’s coldest and toughest environments. This and the radiation problem put an end to the super-sized Soviet nuclear train.

    And so we finally come to planes. The idea behind nuclear powered planes in the 1950s was that bombers carrying atomic bombs could be kept permanently on standby, flying around the Arctic Circle for days or weeks at a time without the need to refuel and ready to attack at a moment’s notice. Both the U.S. and the Soviets worked on nuclear powered planes. There were two methods of making nuclear powered jet engines. The first was simple and lightweight: the direct cycle engine. In place of a combustion chamber, the air coming into the jet is directed through the reactor core; this would cool the core and heat the air, which would then be directed back out of the jet exhaust as thrust. The problem with this method is that if the shielding is not good enough then the air can become irradiated, so you would leave a trail of radiation behind the plane. The second method used an indirect way of linking the air to the reactor via a heat exchanger, so that the air could not get irradiated, but it also meant a lot of extra heavy plumbing and complexity, which would make the plane heavier, slower and more susceptible to attack.

    The biggest problem that both the U.S. and the Soviets faced with nuclear powered planes was getting enough thrust from the engines while carrying the extra weight of the shielding needed to protect the crew. While no actual flights were made under nuclear power in the U.S., a highly modified Convair B-36 Peacemaker did fly with a real reactor on board to test a distributed method of radiation shielding. By the time President Kennedy was elected in 1961, the direct cycle engine developed by General Electric was regularly making high levels of thrust under nuclear power in ground-based tests. Work on what was to be the WS-125 long-range nuclear bomber had continued from 1954 to 1961, but when new intelligence from the U-2 spy planes and satellites showed that the Soviets had far fewer bombers than the U.S. had thought, and that the rumored Russian nuclear powered bombers just didn’t exist, Kennedy scrapped the WS-125 bomber program in favor of more missile submarine development. But after the fall of communism in Russia it was revealed that the Soviets had actually flown a nuclear-powered version of the Tu-95 “Bear” long-range bomber 40 times between 1961 and 1969. Under pressure and believing that the Americans were close to creating a nuclear bomber, the Soviets flew tests with direct cycle nuclear powered engines. However, the engines were inefficient and spewed radiation into the air, and the plane had to fly with no shielding to protect the crew, as otherwise it would have been too heavy to take off. Although it worked, within three years some of the crew had died due to the radiation exposure from the test flights, and this was the real Achilles heel of nuclear powered planes: whilst the engines may have worked, the shielding was still a major problem.

    Today, with new technologies which have arisen since the 1950s, we could build smaller and safer nuclear power sources. We’ve already done this for spacecraft like the Voyager probes of the 1970s, which are still going in deep space, and for landers like the Mars Curiosity rover in 2012. Nuclear-powered surveillance drones that need no crew or heavy shielding and could fly for weeks or months, and nuclear powered trains in Russia, are being proposed once more. So the future may well glow bright with portable nuclear power, and as always please subscribe, rate and share.


    Ultra High Speed Cameras – How do you film a tank shell in flight or a Nuclear bomb test?

    August 15, 2019


    In my last video I looked at railguns. Now
    whilst I was reviewing the footage I started wondering how they filmed the
    projectiles in flight. These are not the typical sort of high-speed camera shots
    where you see a bullet hitting a target, for example; these are tracking the
    projectile from the barrel down the firing range. From the footage it looks
    like the camera is panning around and following the projectile, but that would
    be impossible; the tank round is traveling at over 1,500 meters per
    second and would normally look like this. For all of you out there who said it’s
    done with mirrors, you are absolutely correct.
    It works by having a computer-controlled high-speed rotating mirror in the line of sight of a high-speed camera. The speed of rotation of the mirror matches that of the object being followed, so the faster the object is traveling, like a railgun projectile, the faster the mirror turns to keep up with it. Using this method the object can be kept in the field of view for a hundred meters or so, or about ninety degrees of the mirror’s movement. In this example, the Tracker2 from Specialised Imaging, you can see the mirror and, to its left, where the camera is. Because the mirror is computer-controlled, it can be programmed to follow objects that accelerate, whether linearly or non-linearly.
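    To get a feel for the speeds involved, here is a minimal sketch of the geometry in Python. A mirror steers the reflected line of sight at twice its own rotation rate, so the mirror only needs half the angular rate of the line of sight. The 1,500 m/s projectile speed is from above; the 100 m camera standoff is an assumption for illustration.

```python
import math

def mirror_rate_dps(v_mps: float, standoff_m: float) -> float:
    """Peak mirror rotation rate in degrees per second needed to track
    an object passing at speed v_mps at perpendicular distance
    standoff_m. The line of sight sweeps fastest at closest approach,
    and the mirror needs only half that rate because the reflected ray
    turns at twice the mirror's angular velocity."""
    los_rate = v_mps / standoff_m        # line-of-sight rate, rad/s
    return math.degrees(los_rate / 2.0)  # mirror rate, deg/s

# Illustrative: a 1,500 m/s round filmed from an assumed 100 m away.
print(f"{mirror_rate_dps(1500, 100):.0f} deg/s")  # ~430 deg/s at peak
```

    Brisk, but mechanically achievable, which is what makes the technique practical.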
    Rotating mirrors aren’t new; in fact, they were some of the first high-speed cameras, are still some of the fastest in the world, capable of up to 25 million frames per second, and were used to record atom bomb blasts. During the Manhattan Project to develop the first atomic bomb, they required cameras that could record the first few microseconds of the explosion. In order to create a nuclear chain reaction and achieve critical mass, a baseball-sized piece of plutonium had to
    be compressed to about half its size. This was achieved by using an array of
    focused high-explosive lenses surrounding the plutonium core. In order to make it work effectively, the explosives, 32 of them in all, had to be triggered within one microsecond; if any were delayed then the compression of the core would be unequal and the reaction would be much weaker or might not happen at all. Using a super high-speed camera it would
    be possible to see how effective the explosive lenses had been just a few
    microseconds after detonation. At the time the fastest cameras were Fastax
    cine cameras and could achieve around 10,000 frames per second or one frame
    per hundred microseconds; this still wasn’t fast enough though. The first
    high-speed rotating mirror camera was the Marley, invented by the British physicist William Gregory Marley. The Marley camera used a rotating mirror and an array of lenses inside a curved housing, each focused onto a single piece of film around the edge of the case. This could record a sequence of up to 50 images onto 35 millimeter film at 100,000 frames per second. But by the time of the Trinity test it was outdated and too slow to record the ultra-quick reaction in the plutonium core. The head of the photography unit, Julian Mack, said that
    the fixed short focus and low quality of the lenses would probably have made the
    Marley camera pictures useless. He helped develop the Mack Streak camera
    which had a 10 million frames per second limit, that’s one frame every hundred
    nanoseconds. By the 1950s Harold Edgerton had developed the Rapatronic camera
    the name coming from Rapid Action Electronic. This used a magneto-optic shutter which allowed it to have an exposure time as short as 10 nanoseconds, that’s ten billionths of a second. It was first used at a hydrogen bomb test at Eniwetok Atoll in 1952. However, each camera took only one image, so to see the first few microseconds of a nuclear detonation, up to 10 were used
    in sequence with an average exposure time of three microseconds. The images
    were then played back and blended together to give the impression of a
    film. For the British nuclear tests the Atomic Weapons Research
    Establishment created the C4, a huge rotating mirror camera weighing in at around 2,000 kilograms, the fastest in the world at the time. This could record up to 7 million frames per second with a mirror rotating at up to 300,000 revolutions per minute, and it recorded the first British atom bomb test on the 3rd of October 1952. The rotating mirror
    cameras are still in use today but now they use highly sensitive CCDs
    to replace the filmstrip. The Brandaris 128 and Cordin model 510 have 128 CCD’s and a gas driven turbine mirror driven by helium to achieve up to 25
    million frames per second at a resolution of 500 x 292 pixels for the
    brand iris and 616 x 920 pixels of recording. At 25 million
    frames per second the mirror itself is running at 1.2 million
    revolutions per minute that’s 20,000 revolutions per second so fast of the
    atmosphere inside the camera is 98% helium to reduce for friction and the
    pressure waves that would occur in normal air. And so onto something I think
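    The numbers above are worth working through; a quick sketch in Python using only the figures quoted, with the one-frame-per-CCD-per-sweep assumption being mine:

```python
fps = 25_000_000   # frames per second quoted for these cameras
rpm = 1_200_000    # mirror speed, revolutions per minute
ccds = 128         # assuming one frame per CCD in a single sweep

print(f"{1e9 / fps:.0f} ns between frames")        # 40 ns
print(f"{rpm / 60:,.0f} mirror revolutions/s")     # 20,000
print(f"{ccds / fps * 1e6:.2f} us of event time")  # 5.12 us
```

    So even at full speed the whole recording covers only around five microseconds, the same first-few-microseconds regime described earlier.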
    And so onto something I think you may find rather interesting. It’s not the fastest camera in the world, but it is, or it was at the time in 2013, the fastest real-time tracker of a moving object, and it was developed by the Ishikawa Oku Lab at the University of Tokyo. Here it is tracking a ping-pong ball and keeping it in the center of the frame at all times, both during a game and when it is being spun around on a piece of string.
    It does this by moving two mirrors in front of the camera, one for the X movement and one for the Y movement. It then uses software similar to face-tracking software to provide feedback to control the mirrors, with a response time of just one millisecond.
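    As a rough illustration of that feedback idea, here is a minimal sketch in Python. It is not the lab’s actual code: the detector is a crude bright-pixel centroid standing in for their tracking software, the gain and sensor size are assumptions, and the returned values would go to hypothetical mirror drivers.

```python
import numpy as np

GAIN = 0.4           # proportional gain, an assumed value tuned by hand
CX, CY = 320, 240    # frame centre for an assumed 640x480 sensor

def centroid(frame: np.ndarray) -> tuple[float, float]:
    """Centroid of bright pixels, a stand-in for real object tracking."""
    ys, xs = np.nonzero(frame > 200)
    return (float(xs.mean()), float(ys.mean())) if xs.size else (CX, CY)

def control_step(frame: np.ndarray, mx: float, my: float):
    """One pass of the ~1 ms loop: measure how far the target sits from
    the frame centre and nudge the X and Y mirror angles to re-centre it."""
    x, y = centroid(frame)
    mx += GAIN * (x - CX)  # correction for the X mirror
    my += GAIN * (y - CY)  # correction for the Y mirror
    return mx, my          # new angles to send to the mirror drivers
```

    Run a thousand times a second, even this crude proportional correction would keep a fast-moving target near the centre of the frame.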
    It can also be used to control a projector, and in this scene it’s projecting an image onto the ping-pong ball whilst it’s being bounced on the bat; you can see the little face change on the ball at the top of its travel. So anyway, I hope you enjoyed this look at some of the equipment behind some of the most amazing footage recorded to date.
    These aren’t the fastest cameras in the world now, but it’s still amazing to think what can be achieved by mechanical means. So as always, thanks for watching, and don’t forget we also have the Curious Droid Facebook page. I would also like to thank all of our patrons for their ongoing support, and if you would like to support us you can find out more on the link now showing. So thanks again for watching and please subscribe, rate and share.


    What’s Wrong with Earth’s Magnetic Field?

    August 13, 2019


    From satellites malfunctioning to
    strange flashing lights seen by astronauts on the International Space
    Station, there’s something odd happening in
    certain parts of space close to Earth, and the cause of these effects could
    have wide-ranging impacts on all of us in the future. In February 2016 the
    state-of-the-art Japanese X-ray observatory Hitomi launched atop a Mitsubishi rocket into a circular orbit 570 km above Earth. Its mission was to study high-energy processes like black holes and supernovae in clusters of distant galaxies, but after just a month of successful operations the Japanese space agency JAXA announced that on the 26th of March they had suddenly lost communications with Hitomi. Worse, the satellite was in an uncontrollable spin; attempts by the internal guidance system
    to correct the rotation only made the problem worse,
    turning it faster and faster until the solar panels broke away leaving Hitomi
    without power. On the 28th of April 2016 after a month of attempts to try and
    re-establish communications JAXA declared the $273 million dollar Space
    Observatory lost but what was the cause of Hitomi’s demise. The answer lies in
    the peculiar pattern of radiation which surrounds the earth discovered in 1958
    by the first US satellite. The American physicist James Van Allen proposed
    fitting Explorer 1 with a Geiger counter attached to a small tape
    recorder to measure radiation levels above the atmosphere.
    Sure enough, Explorer 1 observed distinct patterns of radiation during its 111-day mission. These became known as the Van Allen belts: donut-shaped fields of energized particles emitted from the Sun and trapped by the Earth’s magnetic field. In fact the radiation belts were
    discovered by Sputnik 2 just over two months before the American Explorer 1,
    however, due to Sputnik’s trajectory over foreign territory and the secrecy around the mission, when the signals were picked up by the Australians the two sides refused to talk to each other and the data was not passed on. When the Soviets
    finally got their hands on the data they mistakenly thought that the spikes in
    the radiation were a result of a recent solar flare and by the time they figured
    out the real reason, the American Van Allen team had announced the discovery.
    There are two main concentrations of radiation surrounding the Earth: an inner radiation belt, made up mostly of energized protons, that peaks at around 2,900 km, and an outer radiation belt, mostly consisting of electrons, which peaks at an altitude of around 16,000 kilometers. The Van Allen belts are responsible for auroras such as the
    northern and southern lights where they meet the Earth’s atmosphere around the
    magnetic poles and channel charged particles into the upper atmosphere. It’s here that they react with atmospheric gases and form the colorful displays. The most common aurora color, a pale yellowish green, is produced by oxygen molecules located about 100 km above the Earth. Rare all-red auroras are produced by high-altitude oxygen at heights of up to 330 km, and nitrogen produces blue or purplish-red auroras. But orbital
    surveys revealed another large area where strong radiation dips down to just
    200 kilometers above the Earth’s surface over a continent-sized area above the
    Atlantic coast of South America. This is known as the South Atlantic anomaly, or the cosmic Bermuda Triangle, and it’s this that is the cause of the strange effects on both the human body and robotic spacecraft. In the 1960s and 70s
    spacecraft in orbit relied upon early computer chips which had much larger
    transistor sizes and were less sensitive to the bombardment of charged particles. This electronic toughness was especially important for the Apollo missions, which had to cross the Van Allen belts and leave the protection of Earth’s magnetic field. But during the missions the astronauts reported seeing spots and streaks of light which continued to flash even when they closed their eyes. These visual effects are called phosphenes, and a 2006 survey of astronauts found that 47 out of 59 respondents had experienced them, some even reporting the lights had disturbed their sleep. Although humans haven’t
    crossed the Van Allen belts since December 1972, all spacecraft that passed
    through the South Atlantic anomaly experienced an increased dose of
    radiation of up to a thousand times the usual levels for low Earth orbits. To
    deal with this the International Space Station is fitted with shielding that
    protects astronauts from some of the increased radiation when passing through
    the anomaly. Even so, the crew on the ISS each wear personal dosimeters and
    electronic devices often suffer lock-ups or failures known as single event upsets. These are more common with consumer-grade electronics, like the laptops brought along by the crew, which are built to less demanding specifications than space or military-grade components that are radiation hardened. Since the space shuttle missions, astronauts have reported laptop failures over the South Atlantic; fortunately the fix is usually a simple reboot.
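    Radiation hardening is partly a matter of chip design, but single event upsets can also be masked in software. A classic technique is triple modular redundancy: keep three copies of a value and read back a majority vote, so a bit flipped in any one copy is out-voted by the other two. A minimal sketch in Python, illustrative only and not any particular spacecraft’s code:

```python
def tmr_read(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant copies: each output
    bit is set only if it is set in at least two of the three."""
    return (a & b) | (a & c) | (b & c)

value = 0b1011_0010
copy_a = copy_b = copy_c = value
copy_b ^= 0b0000_1000                             # simulate an upset in one copy
assert tmr_read(copy_a, copy_b, copy_c) == value  # the flip is voted out
```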
    One way to shield against high-energy charged particles is to use water. Yep, good old-fashioned water makes an excellent shield against ions, electrons and protons due to its high hydrogen content. Some astronauts have even taken to lining their sleeping areas, and even their dosimeters, with water bags to reduce their recorded exposure totals and so qualify for longer and more frequent missions, but of course at their own risk of greater radiation exposure.
    However, space telescopes like Hubble depend upon highly sensitive electronics to capture light from distant galaxies. As Hubble passes through the South Atlantic anomaly, operators routinely switch off imaging sensors to avoid damaging them. One of Hubble’s cameras, the Wide Field Camera 3, can still function in this portion of the orbit, but its images still register the characteristic speckling of radiation artifacts.
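    The operational logic is simple in outline. Here is a minimal sketch in Python of the idea; the rectangular bounds are my own coarse assumption standing in for the real exclusion region, which is irregular and, as described below, growing and drifting:

```python
# Rough stand-in bounds for the South Atlantic anomaly at low orbit.
SAA_LAT = (-50.0, 0.0)   # degrees latitude, southern to northern edge
SAA_LON = (-90.0, 40.0)  # degrees longitude, western to eastern edge

def in_saa(lat: float, lon: float) -> bool:
    return SAA_LAT[0] <= lat <= SAA_LAT[1] and SAA_LON[0] <= lon <= SAA_LON[1]

def imaging_allowed(lat: float, lon: float) -> bool:
    """Gate a sensitive detector off while the ground track is inside."""
    return not in_saa(lat, lon)

print(imaging_allowed(-25.0, -45.0))  # False: over the South Atlantic
print(imaging_allowed(48.0, 2.0))     # True: over Paris
```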
    Hitomi’s light-gathering sensors were ultimately found to be the cause of the failure cascade that led to the loss of the Japanese satellite. When passing through the South Atlantic anomaly, the star tracker, which is used to orientate the craft against known fixed points, suffered a series of glitches, causing the satellite to switch to backup gyroscopic instrumentation, which also then failed. Being on the opposite side of the Earth from Japan and out of direct communication with Mission Control, the error was only discovered when it was too late. To
    avoid similar losses in future, space agencies are pursuing two strategies: first, to better understand the South Atlantic anomaly itself, and second, to make satellite electronics more resilient to the effects of radiation as they pass through the danger zone. But the South Atlantic anomaly is not a stationary phenomenon; since 1958 scientists have observed that the anomaly has been increasing in size and is also gradually moving northward and westward. This is in line with both the North and the South Poles, which are constantly moving around.
    Over the last 150 years the magnetic North Pole has moved over 1,000 kilometers and has recently accelerated to 40 km a year, and the magnetic South Pole is now over 1,000 kilometers away from the geographic South Pole. But there isn’t just one anomaly either; in fact there are many smaller ones
    scattered around the globe. The European Space Agency is currently mapping these
    changes with the Swarm mission, which launched three small satellites into polar orbits. Two of the Swarm satellites orbit side by side at an altitude of 450 kilometers and one orbits at a higher altitude of 530 kilometers. Swarm has tracked fluctuations in the Earth’s magnetic field since 2014, and the data has revealed strange weather systems deep beneath our feet that could be to blame for the spreading footprint of the South Atlantic anomaly. Beneath 2,900
    kilometers of crust and mantle, our planet’s core is in two parts: a central core of solid iron-nickel alloy about the size of the Moon and, at 4,500 degrees centigrade, as hot as the surface of an orange star like Arcturus, yet solid due to the immense pressure of 50 million pounds per square inch bearing down on it.
    The second, outer core is molten, about the size of Mars, and fully envelops the inner core. As the planet spins, it’s the movement of this molten metal outer core which is responsible for the generation of the magnetic field. As the outer core is molten iron and is a liquid, it is affected by the Coriolis forces that come from a spinning Earth. These forces create chaotic movement in the outer core, much like the weather in our atmosphere, and also like our weather, this outer-core weather is driven by heat, but instead of heat from the Sun it’s heat from the inner core. This sets up massive convection currents which, combined with the swirling effects of the Coriolis forces, create weather systems and storms within the molten outer core. These
    storms create new temporary magnetic poles that can be opposite to the
    surrounding area. This is what the South Atlantic anomaly is: a pole reversal, and as such it negates the effect of the magnetic field around it, weakening it
    and allowing charged particles to come much closer to Earth. Like atmospheric
    storms, these outer-core storms can also merge and become larger, which is how the
    South Atlantic anomaly is believed to have grown in strength. The Swarm mission has also identified a molten iron jet stream within the outer core, flowing westwards beneath Alaska and Siberia at a speed of around 50 kilometers per year. Although this may not sound like a lot, this dense flow of iron takes a huge
    amount of energy to move and is estimated to be hundreds of millions of
    years old. This increasingly chaotic behavior in the outer core is thought to
    be the prelude to a complete pole reversal, something which has happened hundreds of times before in Earth’s history, about every 250,000 years, although the last time it happened was over 750,000 years ago, so it looks like we are due for another pole reversal any time. Although a pole reversal takes several thousand years to complete, during this time the Earth’s
    magnetic field would weaken considerably and might consist of multiple north and
    south poles spread anywhere around the globe. A weaker magnetic field would bring charged particles closer to the Earth
    and could create ozone holes like that which formed over the South Pole a few
    decades ago. This would increase the level of ultraviolet light getting through
    and increase the risk of things like skin cancer. It would also leave us more
    susceptible to solar storms which could wreak havoc with the world’s electrical
    power grid. By finding more weather patterns in the core, Swarm will help predict the future shape and behavior of Earth’s magnetic field, but in any case building tougher satellites is set to be big business in the coming decades as we see a surge in the number of smallsats. By launching networks of miniaturized
    satellites into low-earth orbits, governments and commercial satellite
    operators aim to replace heavy and expensive geostationary satellites at a
    fraction of the cost. Smallsats usually orbit below most of the radiation of the inner Van Allen belt and avoid the long-term degradation of electronics found on geostationary satellites, which operate within the outer Van Allen belt. However, smallsats have another problem: they frequently pass through the South Atlantic anomaly. One company working on the problem is Thales Alenia Space, a
    French-Italian aerospace company who are Europe’s largest satellite manufacturer.
    They are pioneering the use of gallium nitride on silicon chips which operate
    at higher voltages and temperatures than regular silicon chips, offering increased
    protection against radiation. In commercial applications the loss of
    several nodes in a satellite network would be an inconvenience,
    however, if military satellite operators transition from geostationary to low-Earth orbit smallsats, then the effects of the South Atlantic anomaly could introduce a significant vulnerability, especially after solar storms. Whichever way we go, what is happening deep beneath our feet is going to affect what happens high above us, and ultimately what
    happens here on Earth, so we need to take steps to be prepared. So what are your thoughts about the growing instability in the outer core, and what do you think we should do to counter its possible effects on our way of life? Let me know in the comments below. Don’t forget you can also translate any of the Curious Droid videos with the community contributions, and we also have the Curious Droid Facebook page where you can suggest ideas for future videos. I’d also like to
    thank our patrons for their ongoing support and you can find out more by
    clicking on the link which is now showing. So as always thanks for watching and
    please subscribe, thumbs up and share.