The pursuit and application of knowledge and understanding of the natural and social world following a systematic methodology based on evidence.
  • 165 Posts
  • 159 Photos
  • 0 Videos
  • Consultant at DMBC
  • Lives in Los Angeles
  • From Utah
  • Studied Computer Science (Master of Philosophy)
  • Male
  • It's complicated
  • 19/03/1977
  • Followed by 2 people
Recent Updates
  • #Science_News #Science #Turbofan

    The turbofan was invented to improve the fuel consumption of the turbojet. It achieves this by pushing more air, thus increasing the mass and lowering the speed of the propelling jet compared to that of the turbojet. This is done mechanically by adding a ducted fan rather than using viscous forces by adding an ejector, as first envisaged by Whittle.

    Frank Whittle envisioned flight speeds of 500 mph in his March 1936 UK patent 471,368, "Improvements relating to the propulsion of aircraft", in which he describes the principles behind the turbofan, although it was not called that at the time. While the turbojet uses the gas from its thermodynamic cycle as its propelling jet, for aircraft speeds below 500 mph there are two penalties to this design, both of which the turbofan addresses.

    Firstly, energy is wasted as the propelling jet is going much faster rearwards than the aircraft is going forwards, leaving a very fast wake. This wake contains kinetic energy that reflects the fuel used to produce it, rather than the fuel used to move the aircraft forwards. A turbofan harvests that wasted velocity and uses it to power a ducted fan that blows air in bypass channels around the rest of the turbine. This reduces the speed of the propelling jet while pushing more air, and thus more mass.

    The other penalty is that combustion is less efficient at lower speeds. Any action to reduce the fuel consumption of the engine by increasing its pressure ratio or turbine temperature to achieve better combustion causes a corresponding increase in pressure and temperature in the exhaust duct, which in turn causes a higher gas speed from the propelling nozzle (and higher kinetic energy and more wasted fuel). Although the engine would use less fuel to produce a pound of thrust, more fuel is wasted in the faster propelling jet. In other words, the independence of thermal and propulsive efficiencies, as exists with the piston engine/propeller combination which preceded the turbojet, is lost. In contrast, Roth considers regaining this independence the single most important feature of the turbofan, which allows specific thrust to be chosen independently of the gas generator cycle.

    The working substance of the thermodynamic cycle is the only mass accelerated to produce thrust in a turbojet, which is a serious limitation (high fuel consumption) for aircraft speeds below supersonic. For subsonic flight speeds the speed of the propelling jet has to be reduced because there is a price to be paid in producing the thrust. The energy required to accelerate the gas inside the engine (increase in kinetic energy) is expended in two ways: by producing a change in momentum (i.e. a force), and by leaving a wake, which is an unavoidable consequence of producing thrust with an airbreathing engine (or propeller). The wake velocity, and the fuel burned to produce it, can be reduced and the required thrust still maintained by increasing the mass accelerated. A turbofan does this by transferring energy available inside the engine, from the gas generator, to a ducted fan which produces a second, additional mass of accelerated air.
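
    To make the trade-off concrete, the short Python sketch below (purely illustrative figures, not taken from any real engine) computes the net thrust as the rate of change of momentum and the kinetic-energy rate left in the wake, for a small fast jet versus a large slow jet producing the same thrust.

        # Illustrative comparison (hypothetical figures): two jets producing the same
        # net thrust at the same flight speed, one with a small mass flow and a fast
        # jet (turbojet-like), one with a large mass flow and a slow jet (turbofan-like).

        def jet_metrics(mass_flow_kg_s, jet_speed_m_s, flight_speed_m_s):
            """Return net thrust (N) and the kinetic-energy rate left in the wake (W)."""
            net_thrust = mass_flow_kg_s * (jet_speed_m_s - flight_speed_m_s)
            wake_power = 0.5 * mass_flow_kg_s * (jet_speed_m_s - flight_speed_m_s) ** 2
            return net_thrust, wake_power

        v0 = 250.0                            # flight speed, m/s
        fast = jet_metrics(100.0, 600.0, v0)  # small, fast jet
        slow = jet_metrics(700.0, 300.0, v0)  # large, slow jet

        print(fast)  # (35000.0, 6125000.0): 35 kN of thrust, ~6.1 MW wasted in the wake
        print(slow)  # (35000.0, 875000.0):  the same 35 kN, but only ~0.9 MW wasted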

    The transfer of energy from the core to bypass air results in lower pressure and temperature gas entering the core nozzle (lower exhaust velocity) and fan-produced temperature and pressure entering the fan nozzle. The amount of energy transferred depends on how much pressure rise the fan is designed to produce (fan pressure ratio). The best energy exchange (lowest fuel consumption) between the two flows, and how the jet velocities compare, depends on how efficiently the transfer takes place which depends on the losses in the fan-turbine and fan.

    The fan flow has a lower exhaust velocity, giving much more thrust per unit energy (lower specific thrust). Both airstreams contribute to the gross thrust of the engine. The additional air for the bypass stream increases the ram drag in the air intake stream-tube, but there is still a significant increase in net thrust. The overall effective exhaust velocity of the two exhaust jets can be made closer to a normal subsonic aircraft's flight speed, approaching the ideal Froude efficiency. A turbofan accelerates a larger mass of air more slowly than a turbojet, which accelerates a smaller mass more quickly; the latter is the less efficient way to generate the same thrust.
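
    For reference, the ideal Froude (propulsive) efficiency mentioned above is usually written, for a single jet of velocity V_j and flight speed V_0, as

        \eta_p \;=\; \frac{2 V_0}{V_0 + V_j} ,

    so with the illustrative figures used earlier (V_0 = 250 m/s), a 600 m/s jet gives \eta_p ≈ 0.59 while a 300 m/s jet gives \eta_p ≈ 0.91, which is why lowering the jet velocity while raising the mass flow pays off at subsonic speeds.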

    The ratio of the mass-flow of air bypassing the engine core compared to the mass-flow of air passing through the core is referred to as the bypass ratio. Engines with more jet thrust relative to fan thrust are known as low-bypass turbofans, those that have considerably more fan thrust than jet thrust are known as high-bypass. Most commercial aviation jet engines in use today are high-bypass, and most modern fighter engines are low-bypass. Afterburners are used on low-bypass turbofans on combat aircraft.
  • #Science_News #Science #Porosomes #Biology

    Porosomes are cup-shaped supramolecular structures in the cell membranes of eukaryotic cells where secretory vesicles transiently dock in the process of vesicle fusion and secretion. The transient fusion of the secretory vesicle membrane at the porosome base, via SNARE proteins, results in the formation of a fusion pore, or continuity, for the release of intravesicular contents from the cell. After secretion is complete, the fusion pore temporarily formed at the base of the porosome is sealed. Porosomes are nanometer-scale structures and contain many different types of protein, especially chloride and calcium channels, actin, and SNARE proteins that mediate the docking and fusion of the vesicles with the cell membrane. Once the vesicles have docked with the SNARE proteins, they swell, which increases their internal pressure. They then transiently fuse at the base of the porosome, and these pressurized contents are ejected from the cell. Examination of cells by electron microscopy following secretion demonstrates an increased presence of partially empty vesicles. This suggests that during the secretory process only a portion of the vesicular contents is able to exit the cell. This could only be possible if the vesicle were to temporarily establish continuity with the cell plasma membrane, expel a portion of its contents, then detach, reseal, and withdraw into the cytosol (endocytose). In this way, the secretory vesicle could be reused for subsequent rounds of exo-endocytosis until completely empty of its contents.

    Porosomes vary in size depending on the cell type. Porosomes in the exocrine pancreas and in endocrine and neuroendocrine cells range from 100 nm to 180 nm in diameter, while in neurons they range from 10 nm to 15 nm (about 1/10 the size of pancreatic porosomes). When a secretory vesicle containing v-SNARE docks at the porosome base containing t-SNARE, membrane continuity (a ring complex) is formed between the two. The size of the t/v-SNARE complex is directly proportional to the size of the vesicle. These vesicles contain dehydrated (non-active) proteins, which are activated once they are hydrated. GTP is required for the transport of water through water channels (aquaporins), and of ions through ion channels, to hydrate the vesicle. Once the vesicle fuses at the porosome base, the contents of the vesicle, at high pressure, are ejected from the cell.

    Generally, porosomes are opened and closed by actin; neurons, however, require a fast response, so they have central plugs that open to release contents and close to stop the release (the composition of the central plug is yet to be determined). Porosomes have been demonstrated to be the universal secretory machinery in cells. The neuronal porosome proteome has been solved, providing a possible molecular architecture and the complete composition of the machinery.
  • #Science_News #Science #Inertial_measurement_unit

    An inertial measurement unit (IMU) is an electronic device that measures and reports a body's specific force, angular rate, and sometimes the orientation of the body, using a combination of accelerometers, gyroscopes, and sometimes magnetometers. When a magnetometer is included, the device is referred to as an IMMU. IMUs are typically used to maneuver modern vehicles, including motorcycles, missiles, aircraft (as part of an attitude and heading reference system), unmanned aerial vehicles (UAVs), and spacecraft such as satellites and landers, among many others. Recent developments allow for the production of IMU-enabled GPS devices. An IMU allows a GPS receiver to work when GPS signals are unavailable, such as in tunnels, inside buildings, or when electronic interference is present.

    Operational principles:

    An inertial measurement unit works by detecting linear acceleration using one or more accelerometers and rotational rate using one or more gyroscopes. Some also include a magnetometer which is commonly used as a heading reference. Typical configurations contain one accelerometer, gyro, and magnetometer per axis for each of the three principal axes: pitch, roll and yaw.

    Uses:

    IMUs are often incorporated into inertial navigation systems (INSs), which use the raw IMU measurements to calculate attitude, angular rates, linear velocity and position relative to a global reference frame. The IMU-equipped INS forms the backbone for the navigation and control of many commercial and military vehicles, such as crewed aircraft, missiles, ships, submarines, and satellites. IMUs are also essential components in the guidance and control of uncrewed systems such as UAVs, UGVs, and UUVs. Simpler versions of INSs, termed Attitude and Heading Reference Systems, use IMUs to calculate vehicle attitude with heading relative to magnetic north. The data collected from the IMU's sensors allows a computer to track a craft's position, using a method known as dead reckoning.
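
    As a rough illustration of dead reckoning from IMU data, the Python sketch below (hypothetical samples, 2-D only) integrates a gyro yaw rate into a heading, rotates body-frame accelerations into the world frame, and integrates twice to obtain velocity and position. A real INS additionally has to handle sensor bias, noise, gravity compensation and full 3-D attitude.

        import math

        dt = 0.01                      # sample period, s
        heading = 0.0                  # yaw angle, rad
        vx = vy = px = py = 0.0        # world-frame velocity and position

        # (ax_body, ay_body, yaw_rate) samples in m/s^2 and rad/s: 1 s of forward
        # acceleration followed by 1 s of coasting while turning gently.
        samples = [(1.0, 0.0, 0.0)] * 100 + [(0.0, 0.0, 0.1)] * 100

        for ax_b, ay_b, yaw_rate in samples:
            heading += yaw_rate * dt                                    # integrate gyro
            ax = ax_b * math.cos(heading) - ay_b * math.sin(heading)    # rotate to world frame
            ay = ax_b * math.sin(heading) + ay_b * math.cos(heading)
            vx += ax * dt                                               # integrate acceleration
            vy += ay * dt
            px += vx * dt                                               # integrate velocity
            py += vy * dt

        print(f"position ({px:.2f}, {py:.2f}) m, heading {math.degrees(heading):.1f} deg")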

    In land vehicles, an IMU can be integrated into GPS-based automotive navigation systems or vehicle tracking systems, giving the system a dead-reckoning capability and the ability to gather as much accurate data as possible about the vehicle's current speed, turn rate, heading, inclination and acceleration. Combined with the vehicle's wheel speed sensor output and, if available, the reverse gear signal, this data can be used for purposes such as better traffic collision analysis.

    Besides navigational purposes, IMUs serve as orientation sensors in many consumer products. Almost all smartphones and tablets contain IMUs as orientation sensors. Fitness trackers and other wearables may also include IMUs to measure motion, such as running. IMUs can also be used to gauge an individual's level of motor development during motion by identifying the specificity and sensitivity of specific parameters associated with running. Some gaming systems, such as the remote controls for the Nintendo Wii, use IMUs to measure motion. Low-cost IMUs have enabled the proliferation of the consumer drone industry. They are also frequently used for sports technology (technique training) and animation applications, and are a competing technology for motion capture. An IMU is at the heart of the balancing technology used in the Segway Personal Transporter.
  • #Science_News #Science #Rolls_Royce_Merlin #Aero_engine

    The Rolls-Royce Merlin is a British liquid-cooled V-12 piston aero engine of 27-litre (1,650 cu in) capacity. Rolls-Royce designed the engine and first ran it in 1933 as a private venture. Initially known as the PV-12, it was later called Merlin following the company convention of naming its four-stroke piston aero engines after birds of prey.

    After several modifications, the first production variants of the PV-12 were completed in 1936. The first operational aircraft to enter service using the Merlin were the Fairey Battle, Hawker Hurricane and Supermarine Spitfire. The Merlin remains most closely associated with the Spitfire and Hurricane, although the majority of the production run was for the four-engined Avro Lancaster heavy bomber. A series of rapidly applied developments, brought about by wartime needs, markedly improved the engine's performance and durability. Starting at 1,000 horsepower (750 kW) for the first production models, most late-war versions produced just under 1,800 horsepower (1,300 kW), and the very latest versions, as used in the de Havilland Hornet, produced over 2,000 horsepower (1,500 kW).

    One of the most successful aircraft engines of the World War II era, some 50 versions of the Merlin were built by Rolls-Royce in Derby, Crewe and Glasgow, as well as by Ford of Britain at their Trafford Park factory, near Manchester. A de-rated version was also the basis of the Rolls-Royce/Rover Meteor tank engine. Post-war, the Merlin was largely superseded by the Rolls-Royce Griffon for military use, with most Merlin variants being designed and built for airliners and military transport aircraft.

    The Packard V-1650 was a version of the Merlin built in the United States. Production ceased in 1950 after a total of almost 150,000 engines had been delivered. Merlin engines remain in Royal Air Force service today with the Battle of Britain Memorial Flight, and power many restored aircraft in private ownership worldwide.
  • #Science_News #Science #Corliss_steam_engine #Steam_engine

    A Corliss steam engine is a steam engine fitted with rotary valves and variable valve timing, patented in 1849 and named after its inventor, the US engineer George Henry Corliss of Providence, Rhode Island.

    Engines fitted with Corliss valve gear offered the best thermal efficiency of any type of stationary steam engine until the refinement of the uniflow steam engine and steam turbine in the 20th century. Corliss engines were generally about 30 percent more fuel efficient than conventional steam engines with fixed cutoff. This increased efficiency made steam power more economical than water power, allowing industrial development away from millponds.

    Corliss engines were typically used as stationary engines to provide mechanical power to line shafting in factories and mills and to drive dynamos to generate electricity. Many were quite large, standing many metres tall and developing several hundred horsepower, albeit at low speed, turning massive flywheels weighing several tons at about 100 revolutions per minute. Some of these engines have unusual roles as mechanical legacy systems, and because of their relatively high efficiency and low maintenance requirements, some remain in service into the early 21st century. See, for example, the engines at the Hook Norton Brewery and the Distillerie Dillon in the list of operational engines.

    Corliss engine mechanisms:

    Corliss engines have four valves for each cylinder, with steam and exhaust valves located at each end. Corliss engines incorporate distinct refinements in both the valves themselves and in the valve gear, that is, the system of linkages that operate the valves.

    The use of separate valves for steam admission and exhaust means that neither the valves nor the steam passages between cylinders and valves need to change temperature during the power and exhaust cycle, and it means that the timing of the admission and exhaust valves can be independently controlled. In contrast, conventional steam engines have a slide valve or piston valve that alternately feeds and exhausts through passages to each end of the cylinder. These passages are exposed to wide temperature swings during engine operation, and there are high temperature gradients within the valve mechanism.

    Clark (1891) commented that the Corliss gear "is essentially a combination of elements previously known and used separately, affecting the cylinder and the valve-gear". The origins of the Corliss gear with regard to previous steam valve gear were traced by Inglis (1868).
  • #Science_News #Science #Apparent_magnitude

    Apparent magnitude (m) is a measure of the brightness of a star or other astronomical object observed from Earth. An object's apparent magnitude depends on its intrinsic luminosity, its distance from Earth, and any extinction of the object's light caused by interstellar dust along the line of sight to the observer.

    The word magnitude in astronomy, unless stated otherwise, usually refers to a celestial object's apparent magnitude. The magnitude scale dates back to the ancient Roman astronomer Claudius Ptolemy, whose star catalog listed stars from 1st magnitude (brightest) to 6th magnitude (dimmest). The modern scale was mathematically defined in a way to closely match this historical system.

    The scale is reverse logarithmic: the brighter an object is, the lower its magnitude number. A difference of 1.0 in magnitude corresponds to a brightness ratio of about 2.512. For example, a star of magnitude 2.0 is 2.512 times as bright as a star of magnitude 3.0, 6.31 times as bright as a star of magnitude 4.0, and 100 times as bright as one of magnitude 7.0.
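
    The relation behind these figures is the standard magnitude-flux formula,

        m_1 - m_2 \;=\; -2.5 \log_{10}\!\left(\frac{F_1}{F_2}\right)
        \qquad\Longleftrightarrow\qquad
        \frac{F_1}{F_2} \;=\; 100^{\,(m_2 - m_1)/5} ,

    so a difference of one magnitude is a flux ratio of 100^{1/5} ≈ 2.512, and the five-magnitude gap between magnitude 2.0 and 7.0 is a flux ratio of exactly 100.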

    Differences in astronomical magnitudes can also be related to another logarithmic ratio scale, the decibel: an increase of one astronomical magnitude is exactly equal to a decrease of 4 decibels (dB).
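
    The 4 dB figure follows directly from the definitions of the two scales:

        \Delta L_{\mathrm{dB}} \;=\; 10 \log_{10}\!\left(\frac{F_2}{F_1}\right)
        \;=\; 10 \log_{10}\!\left(10^{-0.4\,\Delta m}\right) \;=\; -4\,\Delta m ,

    i.e. each one-magnitude increase (a dimming) corresponds to exactly a 4 dB decrease in flux.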

    The brightest astronomical objects have negative apparent magnitudes: for example, Venus at −4.2 or Sirius at −1.46. The faintest stars visible with the naked eye on the darkest night have apparent magnitudes of about +6.5, though this varies depending on a person's eyesight and with altitude and atmospheric conditions. The apparent magnitudes of known objects range from the Sun at −26.832 to objects in deep Hubble Space Telescope images of magnitude +31.5.

    The measurement of apparent magnitude is called photometry. Photometric measurements are made in the ultraviolet, visible, or infrared wavelength bands using standard passband filters belonging to photometric systems such as the UBV system or the Strömgren uvbyβ system.

    Absolute magnitude is a measure of the intrinsic luminosity of a celestial object, rather than its apparent brightness, and is expressed on the same reverse logarithmic scale. Absolute magnitude is defined as the apparent magnitude that a star or object would have if it were observed from a distance of 10 parsecs (33 light-years; 3.1×10¹⁴ kilometres; 1.9×10¹⁴ miles). Therefore, it is of greater use in stellar astrophysics since it refers to a property of a star regardless of how close it is to Earth. But in observational astronomy and popular stargazing, unqualified references to "magnitude" are understood to mean apparent magnitude.
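
    The two magnitudes are linked by the distance modulus; for a distance d in parsecs (ignoring extinction),

        M \;=\; m - 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right) \;=\; m - 5 \log_{10} d + 5 .

    As a worked example, taking Sirius's distance as roughly 2.64 pc (a figure not quoted above), m = −1.46 gives M ≈ −1.46 − 5 log₁₀(2.64) + 5 ≈ +1.4, so the brightest star in the night sky is intrinsically only a moderately luminous one.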

    Amateur astronomers commonly express the darkness of the sky in terms of limiting magnitude, i.e. the apparent magnitude of the faintest star they can see with the naked eye. This can be useful as a way of monitoring the spread of light pollution.

    Apparent magnitude is really a measure of illuminance, which can also be measured in photometric units such as lux.
  • #Science_News #Science #Protein_biosynthesis #Biology

    Protein biosynthesis (or protein synthesis) is a core biological process, occurring inside cells, balancing the loss of cellular proteins (via degradation or export) through the production of new proteins. Proteins perform a number of critical functions as enzymes, structural proteins or hormones. Protein synthesis is a very similar process for both prokaryotes and eukaryotes but there are some distinct differences.

    Protein synthesis can be divided broadly into two phases - transcription and translation. During transcription, a section of DNA encoding a protein, known as a gene, is converted into a template molecule called messenger RNA (mRNA). This conversion is carried out by enzymes, known as RNA polymerases, in the nucleus of the cell. In eukaryotes, this mRNA is initially produced in a premature form (pre-mRNA) which undergoes post-transcriptional modifications to produce mature mRNA. The mature mRNA is exported from the cell nucleus via nuclear pores to the cytoplasm of the cell for translation to occur. During translation, the mRNA is read by ribosomes which use the nucleotide sequence of the mRNA to determine the sequence of amino acids. The ribosomes catalyze the formation of covalent peptide bonds between the encoded amino acids to form a polypeptide chain.
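
    The decoding step of translation can be illustrated with a toy Python sketch: a tiny codon table (only a handful of the 64 codons, for brevity) is used to read an mRNA sequence three bases at a time until a stop codon is reached. This is purely illustrative and ignores initiation factors, tRNAs, the ribosome itself and all later processing.

        # Toy translation of an mRNA coding sequence into a peptide.
        CODON_TABLE = {
            "AUG": "Met",  # start codon
            "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "GAU": "Asp",
            "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
        }

        def translate(mrna):
            peptide = []
            for i in range(0, len(mrna) - 2, 3):          # read codon by codon
                residue = CODON_TABLE.get(mrna[i:i + 3], "???")
                if residue == "STOP":                     # stop codon ends translation
                    break
                peptide.append(residue)
            return peptide

        print(translate("AUGUUUGGCAAAGAUUAA"))  # ['Met', 'Phe', 'Gly', 'Lys', 'Asp']

    Changing a single base in this sequence (for example AAA to UAA) would introduce a premature stop codon and truncate the peptide, which is the kind of mutation effect discussed later in this post.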

    Following translation the polypeptide chain must fold to form a functional protein; for example, to function as an enzyme the polypeptide chain must fold correctly to produce a functional active site. To adopt a functional three-dimensional (3D) shape, the polypeptide chain must first form a series of smaller underlying structures called secondary structures. The polypeptide chain in these secondary structures then folds to produce the overall 3D tertiary structure. Once correctly folded, the protein can undergo further maturation through different post-translational modifications. Post-translational modifications can alter the protein's ability to function, where it is located within the cell (e.g. cytoplasm or nucleus) and the protein's ability to interact with other proteins.

    Protein biosynthesis has a key role in disease as changes and errors in this process, through underlying DNA mutations or protein misfolding, are often the underlying causes of a disease. DNA mutations change the subsequent mRNA sequence, which then alters the mRNA encoded amino acid sequence. Mutations can cause the polypeptide chain to be shorter by generating a stop sequence which causes early termination of translation. Alternatively, a mutation in the mRNA sequence changes the specific amino acid encoded at that position in the polypeptide chain. This amino acid change can impact the protein's ability to function or to fold correctly. Misfolded proteins are often implicated in disease as improperly folded proteins have a tendency to stick together to form dense protein clumps. These clumps are linked to a range of diseases, often neurological, including Alzheimer's disease and Parkinson's disease.
  • #Science_News #Science #Gravitational_microlensing

    Gravitational microlensing is an astronomical phenomenon due to the gravitational lens effect. It can be used to detect objects that range from the mass of a planet to the mass of a star, regardless of the light they emit. Typically, astronomers can only detect bright objects that emit much light (stars) or large objects that block background light (clouds of gas and dust). These objects make up only a minor portion of the mass of a galaxy. Microlensing allows the study of objects that emit little or no light. Gravitational microlensing was first theorised by Refsdal (1964) and first discovered by Irwin et al. (1988). The first object in which it was discovered was the Einstein Cross, or Huchra's Lens, 2237+0305. The initial lightcurve of the object was published by Corrigan et al. (1991), who calculated that the object causing the microlensing was a Jupiter-sized body. This was the first discovery of a planet in another galaxy.

    When a distant star or quasar gets sufficiently aligned with a massive compact foreground object, the bending of light due to its gravitational field, as discussed by Albert Einstein in 1915, leads to two distorted images (generally unresolved), resulting in an observable magnification. The time-scale of the transient brightening depends on the mass of the foreground object as well as on the relative proper motion between the background 'source' and the foreground 'lens' object.
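
    The transient brightening described above is commonly modelled with the standard point-source, point-lens (Paczyński) lightcurve. The Python sketch below uses hypothetical event parameters (u0, t0, tE), purely to show the shape of the curve and how its duration is set by the Einstein-radius crossing time tE.

        import math

        def magnification(u):
            """Point-source magnification at normalised impact parameter u."""
            return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

        u0, t0, tE = 0.1, 0.0, 30.0   # impact parameter, peak time, Einstein time (days)

        for t in range(-60, 61, 20):
            u = math.sqrt(u0 ** 2 + ((t - t0) / tE) ** 2)
            print(f"t = {t:+4d} d   A = {magnification(u):5.2f}")

        # The curve peaks at t = t0 (A ~ 10 for u0 = 0.1) and its width scales with tE,
        # which in turn depends on the lens mass and the relative proper motion.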

    Ideally aligned microlensing produces a clear buffer between the radiation from the lens and source objects. It magnifies the distant source, revealing it or enhancing its size and/or brightness. It enables the study of the population of faint or dark objects such as brown dwarfs, red dwarfs, planets, white dwarfs, neutron stars, black holes, and massive compact halo objects. Such lensing works at all wavelengths, magnifying and producing a wide range of possible warping for distant source objects that emit any kind of electromagnetic radiation.

    Microlensing by an isolated object was first detected in 1989. Since then, microlensing has been used to constrain the nature of the dark matter, detect exoplanets, study limb darkening in distant stars, constrain the binary star population, and constrain the structure of the Milky Way's disk. Microlensing has also been proposed as a means to find dark objects like brown dwarfs and black holes, study starspots, measure stellar rotation, and probe quasars including their accretion disks. Microlensing was used in 2018 to detect Icarus, then the most distant star ever observed.
  • #Science_News #Science #Analytical_mechanics #Physics

    In theoretical physics and mathematical physics, analytical mechanics, or theoretical mechanics, is a collection of closely related alternative formulations of classical mechanics. It was developed by many scientists and mathematicians during the 18th century and onward, after Newtonian mechanics. Since Newtonian mechanics considers vector quantities of motion, particularly the accelerations, momenta and forces of the constituents of the system, an alternative name for the mechanics governed by Newton's laws and Euler's laws is vectorial mechanics.

    By contrast, analytical mechanics uses scalar properties of motion representing the system as a whole—usually its total kinetic energy and potential energy—not Newton's vectorial forces of individual particles. A scalar is specified by a magnitude alone, whereas a vector is specified by both a magnitude and a direction. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation.

    Analytical mechanics takes advantage of a system's constraints to solve problems. The constraints limit the degrees of freedom the system can have, and can be used to reduce the number of coordinates needed to solve for the motion. The formalism is well suited to arbitrary choices of coordinates, known in the context as generalized coordinates. The kinetic and potential energies of the system are expressed using these generalized coordinates or momenta, and the equations of motion can be readily set up, thus analytical mechanics allows numerous mechanical problems to be solved with greater efficiency than fully vectorial methods. It does not always work for non-conservative forces or dissipative forces like friction, in which case one may revert to Newtonian mechanics.

    Two dominant branches of analytical mechanics are Lagrangian mechanics (using generalized coordinates and corresponding generalized velocities in configuration space) and Hamiltonian mechanics (using coordinates and corresponding momenta in phase space). Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta, therefore both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries.
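
    As a minimal worked example of both formulations, consider a one-dimensional harmonic oscillator of mass m and spring constant k. The Lagrangian route gives

        L(q, \dot q) = \tfrac{1}{2} m \dot q^{2} - \tfrac{1}{2} k q^{2} ,
        \qquad
        \frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q}
        = m \ddot q + k q = 0 ,

    while the Legendre transformation p = \partial L / \partial \dot q = m \dot q leads to the Hamiltonian route,

        H(q, p) = \frac{p^{2}}{2m} + \tfrac{1}{2} k q^{2} ,
        \qquad
        \dot q = \frac{\partial H}{\partial p} = \frac{p}{m} ,
        \quad
        \dot p = -\frac{\partial H}{\partial q} = -k q ,

    and eliminating p recovers the same equation of motion, m\ddot q = -k q, illustrating that the two formulations contain the same dynamical information.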

    Analytical mechanics does not introduce new physics and is not more general than Newtonian mechanics. Rather it is a collection of equivalent formalisms which have broad application. In fact the same principles and formalisms can be used in relativistic mechanics and general relativity, and with some modifications, quantum mechanics and quantum field theory.

    Analytical mechanics is used widely, from fundamental physics to applied mathematics, particularly chaos theory.

    The methods of analytical mechanics apply to discrete particles, each with a finite number of degrees of freedom. They can be modified to describe continuous fields or fluids, which have infinite degrees of freedom. The definitions and equations have a close analogy with those of mechanics.
  • #Science_News #Science #Excited_state #Quantum_mechanics

    In quantum mechanics, an excited state of a system (such as an atom, molecule or nucleus) is any quantum state of the system that has a higher energy than the ground state (that is, more energy than the absolute minimum). Excitation refers to an increase in energy level above a chosen starting point, usually the ground state, but sometimes an already excited state. The temperature of a group of particles is indicative of the level of excitation (with the notable exception of systems that exhibit negative temperature).

    The lifetime of a system in an excited state is usually short: spontaneous or induced emission of a quantum of energy (such as a photon or a phonon) usually occurs shortly after the system is promoted to the excited state, returning the system to a state with lower energy (a less excited state or the ground state). This return to a lower energy level is often loosely described as decay and is the inverse of excitation.

    Long-lived excited states are often called metastable. Long-lived nuclear isomers and singlet oxygen are two examples of this.

    Atomic excitation:

    Atoms can be excited by heat, electricity, or light. The hydrogen atom provides a simple example of this concept. The ground state of the hydrogen atom has the atom's single electron in the lowest possible orbital (that is, the spherically symmetric "1s" wave function, which, so far, has been demonstrated to have the lowest possible quantum numbers). By giving the atom additional energy (for example, by absorption of a photon of an appropriate energy), the electron moves into an excited state (one with one or more quantum numbers greater than the minimum possible). If the photon has too much energy, the electron will cease to be bound to the atom, and the atom will become ionized.

    After excitation the atom may return to the ground state or a lower excited state, by emitting a photon with a characteristic energy. Emission of photons from atoms in various excited states leads to an electromagnetic spectrum showing a series of characteristic emission lines (including, in the case of the hydrogen atom, the Lyman, Balmer, Paschen and Brackett series).
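
    The characteristic photon energies behind those series follow from the Rydberg formula, 1/λ = R_H (1/n₁² − 1/n₂²). The short Python sketch below computes the longest-wavelength (alpha) line of each named series, as a quick check of the spectral region each falls in.

        # Alpha line (n_lower + 1 -> n_lower) of each hydrogen series.
        R_H = 1.0967758e7          # Rydberg constant for hydrogen, 1/m

        def wavelength_nm(n_lower, n_upper):
            inv_lambda = R_H * (1.0 / n_lower**2 - 1.0 / n_upper**2)
            return 1e9 / inv_lambda

        for name, n_lower in [("Lyman", 1), ("Balmer", 2), ("Paschen", 3), ("Brackett", 4)]:
            print(f"{name:8s} {n_lower + 1}->{n_lower}: {wavelength_nm(n_lower, n_lower + 1):7.1f} nm")

        # Lyman-alpha ~121.6 nm (ultraviolet), H-alpha ~656.5 nm (visible),
        # Paschen-alpha ~1876 nm and Brackett-alpha ~4052 nm (infrared).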

    An atom in a highly excited state is termed a Rydberg atom. A system of highly excited atoms can form a long-lived condensed excited state, e.g. a condensed phase made completely of excited atoms: Rydberg matter.