Search Engine

Wednesday, December 9, 2009

Passive analogue filter development

Analogue filters are a basic building block of signal processing, much used in electronics. Amongst their many applications are the separation of an audio signal before application to bass, mid-range and tweeter loudspeakers; the combining and later separation of multiple telephone conversations onto a single channel; and the selection of a chosen radio station in a radio receiver and rejection of others. Passive linear electronic analogue filters are those filters which can be described with linear differential equations (linear); they are composed of capacitors, inductors and, sometimes, resistors (passive) and are designed to operate on continuously varying (analogue) signals. There are many linear filters which are not analogue in implementation (digital filters), and there are many electronic filters which may not have a passive topology – both of which may have the same transfer function as the filters described in this article. Analogue filters are most often used in wave filtering applications, that is, where it is required to pass particular frequency components and to reject others from analogue (continuous-time) signals.
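To make the "pass some frequencies, reject others" idea concrete, here is a small sketch (not from the original article) of the magnitude response of a first-order RC low-pass filter; the component values are illustrative:

```python
import math

def rc_lowpass_gain(f_hz, r_ohm, c_farad):
    """Magnitude of H(jw) = 1 / (1 + jwRC) for a first-order RC low-pass."""
    w = 2 * math.pi * f_hz
    return 1 / math.sqrt(1 + (w * r_ohm * c_farad) ** 2)

# Cutoff frequency fc = 1 / (2*pi*R*C); the gain there is 1/sqrt(2) (-3 dB)
r, c = 1_000.0, 159.15e-9          # values chosen for a ~1 kHz cutoff
fc = 1 / (2 * math.pi * r * c)
print(round(fc), round(rc_lowpass_gain(fc, r, c), 3))
```

Frequencies well below fc pass nearly unattenuated, while those well above are progressively rejected, which is exactly the wave-filtering behaviour described above.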
Analogue filters have played an important part in the development of electronics. Especially in the field of telecommunications, filters have been of crucial importance in a number of technological breakthroughs and have been the source of enormous profits for telecommunications companies. It should come as no surprise, therefore, that the early development of filters was intimately connected with transmission lines. Transmission line theory gave rise to filter theory, which initially took a very similar form, and the main application of filters was for use on telecommunication transmission lines. However, the arrival of network synthesis techniques greatly enhanced the degree of control available to the designer.

Today, it is often preferred to carry out filtering in the digital domain, where complex algorithms are much easier to implement, but analogue filters still find applications, especially for low-order, simple filtering tasks, and are often still the norm at higher frequencies where digital technology is impractical, or at least less cost-effective. Wherever possible, and especially at low frequencies, analogue filters are now implemented in an active filter topology in order to avoid the wound components required by passive topologies.

It is possible to design linear analogue mechanical filters using mechanical components which filter mechanical vibrations or acoustic waves. While there are few applications for such devices in mechanics per se, they can be used in electronics with the addition of transducers to convert to and from the electrical domain. Indeed, some of the earliest ideas for filters were acoustic resonators, because the electronics technology was poorly understood at the time. In principle, the design of such filters can be achieved entirely in terms of the electronic counterparts of mechanical quantities, with kinetic energy, potential energy and heat energy corresponding to the energy in inductors, capacitors and resistors respectively.

Saturday, December 5, 2009

Transistor

A transistor is a semiconductor device commonly used to amplify or switch electronic signals. A transistor is made of a solid piece of a semiconductor material, with at least three terminals for connection to an external circuit. A voltage or current applied to one pair of the transistor's terminals changes the current flowing through another pair of terminals. Because the controlled (output) power can be much more than the controlling (input) power, the transistor provides amplification of a signal. Some transistors are packaged individually but most are found in integrated circuits.
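The amplification described above can be sketched numerically. This is an idealized, hypothetical example: a BJT operating in its active region approximately obeys Ic = β·Ib, where β is a device-dependent current gain; the values below are illustrative, not from any datasheet:

```python
def collector_current(i_base_a, beta):
    """Idealized BJT in active mode: Ic = beta * Ib."""
    return beta * i_base_a

ib = 10e-6    # 10 microamps of controlling (input) base current
beta = 100    # a typical order-of-magnitude small-signal current gain
ic = collector_current(ib, beta)
print(ic)     # collector (output) current is 100x larger: 1 mA
```

A small input current thus controls a much larger output current, which is the sense in which the transistor "provides amplification of a signal."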
History:-

Physicist Julius Edgar Lilienfeld filed the first patent for a transistor in Canada in 1925, describing a device similar to a field-effect transistor or "FET". However, Lilienfeld did not publish any research articles about his devices, and in 1934 German inventor Oskar Heil patented a similar device. In 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in the United States observed that when electrical contacts were applied to a crystal of germanium, the output power was larger than the input. Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors, and thus could be described as the "father of the transistor". The term was coined by John R. Pierce. According to physicist and historian Robert Arns, legal papers from the Bell Labs patent show that William Shockley and Gerald Pearson had built operational versions from Lilienfeld's patents, yet they never referenced this work in any of their later research papers or historical articles. The first silicon transistor was produced by Texas Instruments in 1954. This was the work of Gordon Teal, an expert in growing crystals of high purity, who had previously worked at Bell Labs. The first MOS transistor actually built was by Kahng and Atalla at Bell Labs in 1960.
Importance:-
The transistor is considered by many to be one of the greatest inventions of the twentieth century. The transistor is the key active component in practically all modern electronics. Its importance in today's society rests on its ability to be mass-produced using a highly automated fabrication process that achieves astonishingly low per-transistor costs. Although several companies each produce over a billion individually packaged (known as discrete) transistors every year, the vast majority of transistors produced are in integrated circuits (often shortened to ICs, microchips or simply chips), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits. A logic gate consists of up to about twenty transistors, whereas an advanced microprocessor, as of 2006, can use as many as 1.7 billion transistors (MOSFETs). "About 60 million transistors were built this year [2002] ... for [each] man, woman, and child on Earth." The transistor's low cost, flexibility, and reliability have made it a ubiquitous device. Transistorized mechatronic circuits have replaced electromechanical devices in controlling appliances and machinery. It is often easier and cheaper to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical control function.
Uses:-
The bipolar junction transistor, or BJT, was the most commonly used transistor in the 1960s and 70s. Even after MOSFETs became widely available, the BJT remained the transistor of choice for many analog circuits such as simple amplifiers because of its greater linearity and ease of manufacture. Desirable properties of MOSFETs, such as their utility in low-power devices, usually in the CMOS configuration, allowed them to capture nearly all market share for digital circuits; more recently, MOSFETs have captured most analog and power applications as well, including modern clocked analog circuits, voltage regulators, amplifiers, power transmitters, motor drivers, etc.

Electrical impedance

Electrical impedance, or simply impedance, is a measure of opposition to alternating current (AC). Electrical impedance extends the concept of resistance to AC circuits, describing not only the relative amplitudes of the voltage and current, but also the relative phases. When the circuit is driven with direct current (DC) there is no distinction between impedance and resistance; the latter can be thought of as impedance with zero phase angle. The symbol for impedance is usually Z, and it may be represented by writing its magnitude and phase in the form Z∠θ. However, complex number representation is more powerful for circuit analysis purposes. The term impedance was coined by Oliver Heaviside in July 1886, and Arthur Kennelly was the first to represent impedance with complex numbers, in 1893. Impedance is defined as the frequency-domain ratio of the voltage to the current. In other words, it is the voltage–current ratio for a single complex exponential at a particular frequency ω. In general, impedance will be a complex number, but this complex number has the same units as resistance, for which the SI unit is the ohm. For a sinusoidal current or voltage input, the polar form of the complex impedance relates the amplitude and phase of the voltage and current: the magnitude of the impedance is the ratio of the voltage amplitude to the current amplitude, and its phase is the angle by which the current lags the voltage.
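As a sketch of the complex-number representation, the impedance of a hypothetical series R, L and C combination can be computed directly from the standard element impedances Z_R = R, Z_L = jωL and Z_C = 1/(jωC); the component values are illustrative:

```python
import cmath
import math

def series_rlc_impedance(r, l, c, f_hz):
    """Z = R + jwL + 1/(jwC) for ideal series components at frequency f."""
    w = 2 * math.pi * f_hz
    return r + 1j * w * l + 1 / (1j * w * c)

z = series_rlc_impedance(r=50.0, l=1e-3, c=1e-6, f_hz=1_000.0)
mag, phase = cmath.polar(z)          # polar form: magnitude and angle
print(round(mag, 1), round(math.degrees(phase), 1))
```

At this frequency the capacitive reactance dominates, so the phase angle is negative: the current leads the voltage, exactly the kind of phase information a plain resistance cannot express.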

Friday, December 4, 2009

Hall effect sensor

A Hall effect sensor is a transducer that varies its output voltage in response to changes in magnetic field. Hall sensors are used for proximity switching, positioning, speed detection, and current sensing applications. In its simplest form, the sensor operates as an analogue transducer, directly returning a voltage. With a known magnetic field, its distance from the Hall plate can be determined, and using groups of sensors, the relative position of the magnet can be deduced. Electricity carried through a conductor will produce a magnetic field that varies with current, and a Hall sensor can be used to measure the current without interrupting the circuit. Typically, the sensor is integrated with a wound core or permanent magnet that surrounds the conductor to be measured. Frequently, a Hall sensor is combined with circuitry that allows the device to act in a digital (on/off) mode, and may be called a switch in this configuration. Commonly seen in industrial applications such as pneumatic cylinders, they are also used in consumer equipment; for example, some computer printers use them to detect missing paper and open covers. When high reliability is required, they are used in keyboards. Hall sensors are commonly used to time the speed of wheels and shafts, such as for internal combustion engine ignition timing or tachometers. They are used in brushless DC electric motors to detect the position of the permanent magnet. In a wheel carrying two equally spaced magnets, the voltage from the sensor will peak twice for each revolution. This arrangement is commonly used to regulate the speed of disc drives.
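The speed-timing arrangement described above reduces to a simple calculation. A minimal sketch, assuming (as in the text) two magnets per revolution and therefore two Hall pulses per turn; the numbers are illustrative:

```python
def wheel_rpm(pulse_count, interval_s, magnets_per_rev=2):
    """Rotational speed from Hall sensor pulses: each magnet passing the
    sensor produces one voltage peak, so pulses/rev = number of magnets."""
    revolutions = pulse_count / magnets_per_rev
    return revolutions / interval_s * 60.0

# 40 pulses counted in one second with two magnets -> 20 rev/s -> 1200 rpm
print(wheel_rpm(pulse_count=40, interval_s=1.0))
```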
History:-
A Hall probe contains an indium compound crystal mounted on an aluminum backing plate and encapsulated in the probe head. The plane of the crystal is perpendicular to the probe handle. Connecting leads from the crystal are brought down through the handle to the circuit box. When the Hall probe is held so that the magnetic field lines are passing at right angles through the sensor of the probe, the meter gives a reading of the value of magnetic flux density (B). A current is passed through the crystal which, when placed in a magnetic field, has a "Hall effect" voltage developed across it. The Hall effect is seen when a conductor is passed through a uniform magnetic field. The natural drift of the charge carriers causes the magnetic field to apply a Lorentz force (the force exerted on a charged particle in an electromagnetic field) to these charge carriers. The result is what is seen as a charge separation, with a build-up of either positive or negative charges on the bottom or on the top of the plate. The crystal measures 5 mm square. The probe handle, being made of a non-ferrous material, has no disturbing effect on the field. A Hall probe is sensitive enough to measure the Earth's magnetic field. It must be held so that the Earth's field lines are passing directly through it. It is then rotated quickly so the field lines pass through the sensor in the opposite direction; the change in the flux density reading is double the Earth's magnetic flux density. A Hall probe must first be calibrated against a known value of magnetic field strength; for a solenoid, the Hall probe is placed in the center.
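The charge separation described above gives the textbook Hall voltage V_H = I·B/(n·q·t) for a plate of thickness t and carrier density n. A minimal sketch with illustrative (not measured) values:

```python
def hall_voltage(i_amp, b_tesla, n_per_m3, t_m, q=1.602e-19):
    """Textbook Hall voltage V_H = I*B / (n*q*t) for a conducting plate
    of thickness t with carrier density n and carrier charge q."""
    return i_amp * b_tesla / (n_per_m3 * q * t_m)

# A thin semiconductor plate: its low carrier density (compared with a
# metal) is what makes the Hall voltage large enough to measure easily.
v = hall_voltage(i_amp=1e-3, b_tesla=0.1, n_per_m3=1e22, t_m=1e-4)
print(v)  # a fraction of a millivolt
```

The same formula shows why Hall probes use indium-compound semiconductor crystals rather than ordinary metal: the smaller n is, the larger the voltage for a given field.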

Thursday, December 3, 2009

PID controller

A proportional–integral–derivative controller (PID controller) is a generic control-loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then applying a corrective action that adjusts the process accordingly and rapidly, to keep the error minimal.
General:-

The PID controller calculation (algorithm) involves three separate parameters: the proportional, the integral and the derivative values. The proportional value determines the reaction to the current error, the integral value determines the reaction based on the sum of recent errors, and the derivative value determines the reaction based on the rate at which the error has been changing. The weighted sum of these three actions is used to adjust the process via a control element such as the position of a control valve or the power supplied to a heating element. By tuning the three constants in the PID controller algorithm, the controller can provide control action designed for specific process requirements. The response of the controller can be described in terms of its responsiveness to an error, the degree to which it overshoots the setpoint and the degree of system oscillation. Note that the use of the PID algorithm for control does not guarantee optimal control of the system or system stability.
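The three-term calculation can be sketched in code. This is a textbook discrete PID loop, not any particular industrial implementation; the gains and the toy process model are illustrative, not tuned for a real plant:

```python
class PID:
    """Textbook discrete PID: output = Kp*e + Ki*sum(e*dt) + Kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt          # sum of recent errors
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt  # rate of change of error
        self.prev_error = error
        # weighted sum of the three actions
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order process (dpv/dt = u - pv) toward a setpoint of 1.0
pid, pv, dt = PID(kp=2.0, ki=1.0, kd=0.1), 0.0, 0.1
for _ in range(200):
    u = pid.update(1.0, pv, dt)
    pv += dt * (u - pv)
print(round(pv, 3))  # the process variable settles near the setpoint
```

The integral term is what removes the steady-state error here: with proportional action alone, the process would settle short of the setpoint.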
Some applications may require using only one or two modes to provide the appropriate system control. This is achieved by setting the gain of undesired control outputs to zero. A PID controller will be called a PI, PD, P or I controller in the absence of the respective control actions. PI controllers are particularly common, since derivative action is very sensitive to measurement noise, and the absence of an integral value may prevent the system from reaching its target value due to the control action.
Note: Due to the diversity of the field of control theory and application, many naming conventions for the relevant variables are in common use.
History:-
A familiar example of a control loop is the action taken to keep one's shower water at the ideal temperature, which typically involves the mixing of two process streams, cold and hot water. The person feels the water to estimate its temperature. Based on this measurement they perform a control action: use the cold water tap to adjust the process. The person would repeat this input-output control loop, adjusting the hot water flow until the process temperature stabilized at the desired value. Feeling the water temperature is taking a measurement of the process value or process variable (PV). The desired temperature is called the setpoint (SP). The output from the controller and input to the process (the tap position) is called the manipulated variable (MV). The difference between the measurement and the setpoint is the error (e): too hot or too cold, and by how much. As a controller, one decides roughly how much to change the tap position (MV) after one determines the temperature (PV), and therefore the error. This first estimate is the equivalent of the proportional action of a PID controller. The integral action of a PID controller can be thought of as gradually adjusting the temperature when it is almost right. Derivative action can be thought of as noticing the water temperature is getting hotter or colder, and how fast, anticipating further change and tempering adjustments for a soft landing at the desired temperature (SP). Making a change that is too large when the error is small is equivalent to a high-gain controller and will lead to overshoot. If the controller were to repeatedly make changes that were too large and repeatedly overshoot the target, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. If the oscillations increase with time the system is unstable, whereas if they decay the system is stable. If the oscillations remain at a constant magnitude, the system is marginally stable.
A human would not do this because we are adaptive controllers, learning from the process history; but PID controllers do not have the ability to learn and must be set up correctly. Selecting the correct gains for effective control is known as tuning the controller. If a controller starts from a stable state at zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that impact on the process, and hence on the PV. Variables that impact on the process other than the MV are known as disturbances. Generally, controllers are used to reject disturbances and/or implement setpoint changes. Changes in feed water temperature constitute a disturbance to the shower process. In theory, a controller can be used to control any process which has a measurable output (PV), a known ideal value for that output (SP) and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, flow rate, chemical composition, speed and practically every other variable for which a measurement exists. Automobile cruise control is an example of a process which utilizes automated control.
Due to their long history, simplicity, well-grounded theory and simple setup and maintenance requirements, PID controllers are the controllers of choice for many of these applications.

Wednesday, December 2, 2009

Inductance

Inductance is the property of an electrical circuit whereby a change in the electric current through that circuit induces an electromotive force (EMF) that opposes the change in current (see induced EMF). In electrical circuits, any electric current i produces a magnetic field and hence generates a total magnetic flux Φ acting on the circuit. This magnetic flux, due to Lenz's law, tends to act to oppose changes in the flux by generating a voltage (a back EMF) in the circuit that counters or tends to reduce the rate of change in the current. The ratio of the magnetic flux to the current is called the self-inductance, which is usually simply referred to as the inductance of the circuit. To add inductance to a circuit, electronic components called inductors are used; these consist of coils of wire that concentrate the magnetic field. The term 'inductance' was coined by Oliver Heaviside in February 1886. It is customary to use the symbol L for inductance, possibly in honour of the physicist Heinrich Lenz.
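The defining relationship, |v| = L·di/dt, can be illustrated with a quick calculation; the component values are hypothetical:

```python
def induced_emf(l_henry, di_amp, dt_s):
    """Magnitude of the back EMF of an ideal inductor: |v| = L * di/dt."""
    return l_henry * di_amp / dt_s

# A 10 mH inductor whose current ramps by 2 A in 1 ms develops 20 V
# across it, opposing the change in current (Lenz's law).
print(induced_emf(10e-3, 2.0, 1e-3))
```

Note how the voltage depends on the *rate* of change, not on the current itself: the same 2 A change spread over a full second would induce only 20 mV.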

Friday, November 27, 2009

Electromagnetic spectrum

The electromagnetic spectrum is the range of all possible frequencies of electromagnetic radiation. The "electromagnetic spectrum" of an object is the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object. The electromagnetic spectrum extends from below the frequencies used for modern radio to gamma radiation at the short-wavelength end, covering wavelengths from thousands of kilometers down to a fraction of the size of an atom. The long-wavelength limit is the size of the universe itself, while it is thought that the short-wavelength limit is in the vicinity of the Planck length, although in principle the spectrum is infinite and continuous.
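Position in the spectrum can be stated either as a frequency or as a wavelength; the two are linked by λ = c/f. A quick sketch (the example frequencies are illustrative):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def wavelength_m(frequency_hz):
    """Vacuum wavelength of an electromagnetic wave: lambda = c / f."""
    return C / frequency_hz

print(wavelength_m(100e6))   # 100 MHz (FM radio band) -> about 3 m
print(wavelength_m(5.0e14))  # ~5e14 Hz (visible light) -> about 600 nm
```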
Type:-

While the classification scheme is generally accurate, in reality there is often some overlap between neighboring types of electromagnetic energy. For example, radio waves at 60 Hz may be received and studied by astronomers, or may be ducted along wires as electric power.
The distinction between X-rays and gamma rays is based on sources: gamma rays are the photons generated from nuclear decay or other nuclear and particle processes, whereas X-rays are generated by electronic transitions involving highly energetic inner atomic electrons.
Also, the region of the spectrum of a particular electromagnetic radiation is reference-frame dependent (on account of the Doppler shift for light), so EM radiation which one observer would say is in one region of the spectrum could appear to an observer moving at a substantial fraction of the speed of light with respect to the first to be in another part of the spectrum. For example, consider the cosmic microwave background. It was produced, when matter and radiation decoupled, by the de-excitation of hydrogen atoms to the ground state. These photons were from Lyman series transitions, putting them in the ultraviolet (UV) part of the electromagnetic spectrum. Now this radiation has undergone enough cosmological redshift to put it into the microwave region of the spectrum for observers moving slowly (compared to the speed of light) with respect to the cosmos. However, for particles moving near the speed of light, this radiation will be blue-shifted in their rest frame. The highest-energy cosmic ray protons are moving such that, in their rest frame, this radiation is shifted to high-energy gamma rays which interact with the proton to produce bound pairs of particles. This is the source of the GZK limit.
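The redshift argument above can be sketched numerically using λ_obs = λ_emit·(1 + z). The Lyman-alpha wavelength is a standard figure, but the round z ≈ 1100 value used here is only an illustrative approximation for the decoupling epoch:

```python
def redshifted_wavelength(emitted_nm, z):
    """Cosmological redshift: lambda_observed = lambda_emitted * (1 + z)."""
    return emitted_nm * (1 + z)

# A Lyman-alpha UV photon (~121.6 nm) redshifted by z ~ 1100 ends up
# around 0.13 mm, i.e. in the microwave region of the spectrum.
obs_nm = redshifted_wavelength(121.6, 1100)
print(round(obs_nm / 1e6, 3), "mm")
```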

Sunday, November 22, 2009

Direct current

Direct current (DC) is the unidirectional flow of electric charge. Direct current is produced by such sources as batteries, thermocouples, solar cells, and commutator-type electric machines of the dynamo type. Direct current may flow in a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum, as in electron or ion beams. The electric charge flows in a constant direction, distinguishing it from alternating current (AC). A term formerly used for direct current was galvanic current. Direct current may be obtained from an alternating current supply by use of a current-switching arrangement called a rectifier, which contains electronic elements (usually) or electromechanical elements (historically) that allow current to flow only in one direction. Direct current may be made into alternating current with an inverter or a motor-generator set.
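The one-way behaviour of a rectifier can be sketched on sampled data. This toy example simply takes absolute values, standing in for an ideal full-wave diode bridge (a real rectifier also drops some voltage and needs filtering to smooth the result):

```python
import math

def full_wave_rectify(samples):
    """Idealized full-wave rectifier: every sample flows one direction."""
    return [abs(s) for s in samples]

ac = [math.sin(2 * math.pi * t / 20) for t in range(40)]  # two AC cycles
rectified = full_wave_rectify(ac)
print(min(rectified) >= 0)  # True: no negative half-cycles remain
```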
The first commercial electric power transmission (developed by Thomas Edison in the late nineteenth century) used direct current. Because of the advantages of alternating current over direct current in transformation and transmission, electric power distribution today is nearly all alternating current. For applications requiring direct current, such as third-rail power systems, alternating current is distributed to a substation, which uses a rectifier to convert the power to direct current (see War of Currents). Direct current is used to charge batteries, and in nearly all electronic systems as the power supply. Very large quantities of direct-current power are used in the production of aluminum and other electrochemical processes. Direct current is used for some railway propulsion, especially in urban areas. High-voltage direct current is used to transmit large amounts of power from remote generation sites or to interconnect alternating current power grids.
Applications:-
Direct-current installations usually have different types of sockets, switches, and fixtures from those suitable for alternating current, mostly due to the low voltages used. It is usually important with a direct-current appliance not to reverse polarity unless the device has a diode bridge to correct for this (most battery-powered devices do not). DC is commonly found in all low-voltage applications, especially where these are powered by batteries, which can produce only DC, or by solar power systems. Most automotive applications use DC, although the alternator is an AC device which uses a rectifier to produce DC. Most electronic circuits require a DC power supply. Applications using fuel cells (mixing hydrogen and oxygen together with a catalyst to produce electricity and water as byproducts) also produce only DC.
Many telephones connect to a twisted pair of wires, and internally separate the AC component of the voltage between the two wires (the audio signal) from the DC component of the voltage between the two wires (used to power the phone). Telephone exchange communication equipment, such as a DSLAM, uses a standard −48 V DC power supply. The negative polarity is achieved by grounding the positive terminal of the power supply system and the battery bank; this is done to prevent electrolysis depositions. An electrified third rail can be used to power both underground (subway) and overground trains.

Friday, November 20, 2009

Alternating current

In alternating current (AC, also ac) the movement (or flow) of electric charge periodically reverses direction. An electric charge would for instance move forward, then backward, then forward, then backward, over and over again. In direct current (DC), the movement (or flow) of electric charge is only in one direction. Used generically, AC refers to the form in which electricity is delivered to businesses and residences. The usual waveform of an AC power circuit is a sine wave; however, in certain applications different waveforms are used, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. In these applications, an important goal is often the recovery of information encoded (or modulated) onto the AC signal.
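For the sine waveform mentioned above, the commonly quoted "RMS" voltage relates to the peak by Vrms = Vpeak/√2. A quick sketch; the 325 V figure is an illustrative mains-like peak, not a statement about any particular grid:

```python
import math

def rms_of_sine(peak):
    """For a pure sine wave, Vrms = Vpeak / sqrt(2)."""
    return peak / math.sqrt(2)

# A sine wave peaking at ~325 V corresponds to roughly 230 V RMS,
# which is why "230 V" mains actually swings well above that value.
print(round(rms_of_sine(325.0), 1))
```

The RMS value is the useful one for power calculations: a sinusoidal AC voltage delivers the same average power into a resistor as a DC voltage equal to its RMS value.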
History:-
A power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881, and attracted the interest of Westinghouse. They also exhibited the invention in Turin in 1884, where it was adopted for an electric lighting system. Many of their designs were adapted to the particular laws governing electrical distribution in the UK.
In 1882, 1884, and 1885 Gaulard and Gibbs applied for patents on their transformer; however, these were overturned due to prior art of Nikola Tesla and actions initiated by Sebastian Ziani de Ferranti. Ferranti went into this business in 1882 when he set up a shop in London designing various electrical devices. Ferranti believed in the success of alternating current power distribution early on, and was one of the few experts in this system in the UK. In 1887 the London Electric Supply Corporation (LESCo) hired Ferranti for the design of their power station at Deptford. He designed the building, the generating plant and the distribution system. On its completion in 1891 it was the first truly modern power station, supplying high-voltage AC power that was then "stepped down" for consumer use on each street. This basic system remains in use today around the world. Many homes all over the world still have electric meters with the Ferranti AC patent stamped on them.
William Stanley, Jr. designed one of the first practical devices to transfer AC power efficiently between isolated circuits. Using pairs of coils wound on a common iron core, his design, called an induction coil, was an early transformer. The AC power system used today developed rapidly after 1886, and includes key concepts by Nikola Tesla, who subsequently sold his patent to George Westinghouse. Lucien Gaulard, John Dixon Gibbs, Carl Wilhelm Siemens and others contributed subsequently to this field. AC systems overcame the limitations of the direct current system used by Thomas Edison to distribute electricity efficiently over long distances, even though Edison attempted to discredit alternating current as too dangerous during the War of Currents. The first commercial power plant in the United States using three-phase alternating current was the Mill Creek hydroelectric plant near Redlands, California, in 1893, designed by Almirian Decker. Decker's design incorporated 10,000-volt three-phase transmission and established the standards for the complete system of generation, transmission and motors used today. The Jaruga power plant in Croatia was set in operation on 28 August 1895. It was completed three days after the Niagara Falls plant, becoming the second commercial hydro power plant in the world. The two generators (42 Hz, 550 kW each) and the transformers were produced and installed by the Hungarian company Ganz. The transmission line from the power plant to the City of Šibenik was 11.5 kilometers (7.1 mi) long on wooden towers, and the municipal distribution grid 3000 V/110 V included six transforming stations. Alternating current circuit theory evolved rapidly in the latter part of the 19th and early 20th centuries. Notable contributors to the theoretical basis of alternating current calculations include Charles Steinmetz, James Clerk Maxwell, Oliver Heaviside, and many others.
Calculations in unbalanced three-phase systems were simplified by the symmetrical components methods discussed by Charles Legeyt Fortescue in 1918.

Wednesday, November 18, 2009

Light-emitting diode

A light-emitting diode (LED) (pronounced /ˌɛl.iːˈdiː/, or just /lɛd/) is an electronic light source. LEDs are used as indicator lamps in many kinds of electronics and increasingly for lighting. LEDs work by the effect of electroluminescence, discovered by accident in 1907. The LED was introduced as a practical electronic component in 1962. All early devices emitted low-intensity red light, but modern LEDs are available across the visible, ultraviolet and infrared wavelengths, with very high brightness.
LEDs are based on the semiconductor diode. When the diode is forward biased (switched on), electrons are able to recombine with holes and energy is released in the form of light. This effect is called electroluminescence, and the color of the light is determined by the energy gap of the semiconductor. The LED is usually small in area (less than 1 mm²), with integrated optical components to shape its radiation pattern and assist in reflection.
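The link between the energy gap and the emitted colour follows the photon relation λ = hc/E_g. A small sketch; the 1.9 eV gap is an illustrative red-LED value, not a figure for any specific device:

```python
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def led_wavelength_nm(bandgap_ev):
    """Approximate peak emission wavelength from the semiconductor's
    energy gap: lambda = h*c / Eg (idealized, ignoring thermal spread)."""
    return H * C / (bandgap_ev * EV) * 1e9

print(round(led_wavelength_nm(1.9)))  # ~1.9 eV gap -> red light, ~650 nm
```

A wider gap means a shorter wavelength, which is why blue LEDs (discussed below in the history) required wide-bandgap materials such as GaN.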
LEDs present many advantages over traditional light sources including lower energy consumption, longer lifetime, improved robustness, smaller size and faster switching. However, they are relatively expensive and require more precise current and heat management than traditional light sources.
Applications of LEDs are diverse. They are used as low-energy indicators but also for replacements for traditional light sources in general lighting, automotive lighting and traffic signals. The compact size of LEDs has allowed new text and video displays and sensors to be developed, while their high switching rates are useful in communications technology.
History:-
Electroluminescence was discovered in 1907 by the British experimenter H. J. Round of Marconi Labs, using a crystal of silicon carbide and a cat's-whisker detector. Russian Oleg Vladimirovich Losev independently reported on the creation of an LED in 1927. His research was distributed in Russian, German and British scientific journals, but no practical use was made of the discovery for several decades. Rubin Braunstein of the Radio Corporation of America reported on infrared emission from gallium arsenide (GaAs) and other semiconductor alloys in 1955. Braunstein observed infrared emission generated by simple diode structures using gallium antimonide (GaSb), GaAs, indium phosphide (InP), and silicon-germanium (SiGe) alloys at room temperature and at 77 kelvin.
In 1961, experimenters Robert Biard and Gary Pittman, working at Texas Instruments, found that GaAs emitted infrared radiation when electric current was applied, and received the patent for the infrared LED.
The first practical visible-spectrum (red) LED was developed in 1962 by Nick Holonyak Jr. while working at General Electric Company. Holonyak is seen as the "father of the light-emitting diode". M. George Craford, a former graduate student of Holonyak, invented the first yellow LED and improved the brightness of red and red-orange LEDs by a factor of ten in 1972. In 1976, T. P. Pearsall created the first high-brightness, high-efficiency LEDs for optical fiber telecommunications by inventing new semiconductor materials specifically adapted to optical fiber transmission wavelengths.
Up to 1968, visible and infrared LEDs were extremely costly, on the order of US$200 per unit, and so had little practical application. The Monsanto Company was the first organization to mass-produce visible LEDs, using gallium arsenide phosphide in 1968 to produce red LEDs suitable for indicators. Hewlett-Packard (HP) introduced LEDs in 1968, initially using GaAsP supplied by Monsanto. The technology proved to have major applications for alphanumeric displays and was integrated into HP's early handheld calculators.
The first commercial LEDs were commonly used as replacements for incandescent indicators, and in seven-segment displays, first in expensive equipment such as laboratory and electronics test equipment, then later in such appliances as TVs, radios, telephones, calculators, and even watches (see list of signal applications). These red LEDs were bright enough only for use as indicators, as the light output was not enough to illuminate an area. Later, other colors became widely available and also appeared in appliances and equipment. As the LED materials technology became more advanced, the light output was increased, while maintaining the efficiency and the reliability at an acceptable level. The invention and development of the high-power white light LED led to its use for illumination (see list of illumination applications). Most LEDs were made in the very common 5 mm T1¾ and 3 mm T1 packages, but with increasing power output, it has become increasingly necessary to shed excess heat in order to maintain reliability,[19] so more complex packages have been adapted for efficient heat dissipation. Packages for state-of-the-art high-power LEDs bear little resemblance to early LEDs.
The first high-brightness blue LED was demonstrated by Shuji Nakamura of Nichia Corporation and was based on InGaN, building on critical developments in GaN nucleation on sapphire substrates and the demonstration of p-type doping of GaN, which were developed by Isamu Akasaki and H. Amano in Nagoya. In 1995, Alberto Barbieri at the Cardiff University Laboratory (GB) investigated the efficiency and reliability of high-brightness LEDs and demonstrated a very impressive result by using a transparent contact made of indium tin oxide (ITO) on an (AlGaInP/GaAs) LED. The existence of blue LEDs and high-efficiency LEDs quickly led to the development of the first white LED, which employed a Y3Al5O12:Ce, or "YAG", phosphor coating to mix yellow (down-converted) light with blue to produce light that appears white. Nakamura was awarded the 2006 Millennium Technology Prize for his invention.
The development of LED technology has caused their efficiency and light output to increase exponentially, with a doubling occurring about every 36 months since the 1960s, in a way similar to Moore's law. The advances are generally attributed to the parallel development of other semiconductor technologies and advances in optics and materials science. This trend is normally called Haitz's law after Dr. Roland Haitz. In February 2008, Bilkent University in Turkey reported a luminous efficacy of 300 lumens of visible light per watt of radiation (not per electrical watt) and warm light by using nanocrystals.
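The 36-month doubling rule above can be turned into a simple growth estimate. A minimal sketch; the doubling period is the only figure taken from the text, and the function name is just illustrative:

```python
def haitz_factor(months, doubling_months=36):
    """Light-output improvement factor after `months`, assuming
    output doubles every `doubling_months` (Haitz's law)."""
    return 2 ** (months / doubling_months)

# Over one decade (120 months) the output grows by roughly 2^(120/36),
# i.e. about a factor of ten.
decade = haitz_factor(120)
```

This is why a ten-year gap between LED generations corresponds to roughly an order-of-magnitude difference in light output.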
In January 2009, researchers from Cambridge University reported a process for growing gallium nitride (GaN) LEDs on silicon. Production costs could be reduced by 90% using six-inch silicon wafers instead of two-inch sapphire wafers. The team was led by Colin Humphreys.

Galvanometer

A galvanometer is a type of ammeter: an instrument for detecting and measuring electric current. It is an analog electromechanical transducer that produces a rotary deflection of some type of pointer in response to electric current flowing through its coil. The term has expanded to include uses of the same mechanism in recording, positioning, and servomechanism equipment.
History:-
The deflection of a magnetic compass needle by current in a wire was first described by Hans Oersted in 1820. The phenomenon was studied both for its own sake and as a means of measuring electrical current. The earliest galvanometer was reported by Johann Schweigger at the University of Halle on 16 September 1820. André-Marie Ampère also contributed to its development. Early designs increased the effect of the magnetic field due to the current by using multiple turns of wire; the instruments were at first called "multipliers" due to this common design feature. The term "galvanometer", in common use by 1836, was derived from the surname of Italian electricity researcher Luigi Galvani, who discovered that electric current could make a frog's leg jerk.
Originally the instruments relied on the Earth's magnetic field to provide the restoring force for the compass needle; these were called "tangent" galvanometers and had to be oriented before use. Later instruments of the "astatic" type used opposing magnets to become independent of the Earth's field and would operate in any orientation. The most sensitive form, the Thomson or mirror galvanometer, was invented by William Thomson (Lord Kelvin). Instead of a compass needle, it used tiny magnets attached to a small lightweight mirror, suspended by a thread; the deflection of a beam of light greatly magnified the deflection due to small currents. Alternatively the deflection of the suspended magnets could be observed directly through a microscope.
The ability to quantitatively measure voltage and current allowed Georg Ohm to formulate Ohm's Law, which states that the voltage across an element is directly proportional to the current through it.
The early moving-magnet form of galvanometer had the disadvantage that it was affected by any magnets or iron masses near it, and its deflection was not linearly proportional to the current. In 1882 Jacques-Arsène d'Arsonval developed a form with a stationary permanent magnet and a moving coil of wire, suspended by coiled hair springs. The concentrated magnetic field and delicate suspension made these instruments sensitive, and they could be mounted in any position. By 1888 Edward Weston had brought out a commercial form of this instrument, which became a standard component in electrical equipment. This design is almost universally used in moving-coil meters today.
Operation:-
The most familiar use is as an analog measuring instrument, often called a meter. It is used to measure the direct current (flow of electric charge) through an electric circuit. The D'Arsonval/Weston form used today is constructed with a small pivoting coil of wire in the field of a permanent magnet. The coil is attached to a thin pointer that traverses a calibrated scale. A tiny torsion spring pulls the coil and pointer to the zero position.
When a direct current (DC) flows through the coil, the coil generates a magnetic field. This field acts against the permanent magnet. The coil twists, pushing against the spring, and moves the pointer, which indicates the current on a scale. Careful design of the pole pieces ensures that the magnetic field is uniform, so that the angular deflection of the pointer is proportional to the current. A useful meter generally contains provision for damping the mechanical resonance of the moving coil and pointer, so that the pointer settles quickly to its position without oscillation.
The basic sensitivity of a meter might be, for instance, 100 microamperes full scale (with a voltage drop of, say, 50 millivolts at full current). Such meters are often calibrated to read some other quantity that can be converted to a current of that magnitude. The use of current dividers, often called shunts, allows a meter to be calibrated to measure larger currents. A meter can be calibrated as a DC voltmeter if the resistance of the coil is known by calculating the voltage required to generate a full scale current. A meter can be configured to read other voltages by putting it in a voltage divider circuit. This is generally done by placing a resistor in series with the meter coil. A meter can be used to read resistance by placing it in series with a known voltage (a battery) and an adjustable resistor. In a preparatory step, the circuit is completed and the resistor adjusted to produce full scale deflection. When an unknown resistor is placed in series in the circuit the current will be less than full scale and an appropriately calibrated scale can display the value of the previously-unknown resistor.
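The shunt and series-resistor arithmetic described above is straightforward. A sketch using the 100 microampere, 50 millivolt movement given as the example in the text; the extended ranges in the comments are hypothetical:

```python
# Extending the example movement from the text: 100 uA full scale,
# 50 mV drop at full current.
I_FS = 100e-6         # full-scale current, amperes
V_FS = 50e-3          # voltage drop at full-scale current, volts
R_COIL = V_FS / I_FS  # implied coil resistance: 500 ohms

def shunt_resistance(i_range):
    """Parallel shunt so the movement indicates full scale at `i_range`.
    The shunt carries everything except the movement's own 100 uA."""
    return V_FS / (i_range - I_FS)

def multiplier_resistance(v_range):
    """Series resistor so the movement indicates full scale at `v_range`
    when used as a voltmeter."""
    return v_range / I_FS - R_COIL
```

For example, a hypothetical 1 A range needs a shunt of about 0.05 ohms, and a 10 V range needs a 99,500 ohm series resistor.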
Because the pointer of the meter is usually a small distance above the scale of the meter, parallax error can occur when the operator attempts to read the scale line that "lines up" with the pointer. To counter this, some meters include a mirror along the markings of the principal scale. The accuracy of the reading from a mirrored scale is improved by positioning one's head while reading the scale so that the pointer and the reflection of the pointer are aligned; at this point, the operator's eye must be directly above the pointer and any parallax error has been minimized.
Types:-
Extremely sensitive measuring equipment once used mirror galvanometers that substituted a mirror for the pointer. A beam of light reflected from the mirror acted as a long, massless pointer. Such instruments were used as receivers for early trans-Atlantic telegraph systems, for instance. The moving beam of light could also be used to make a record on a moving photographic film, producing a graph of current versus time, in a device called an oscillograph.
Today the main type of galvanometer mechanism still used is the moving coil D'Arsonval/Weston mechanism, which is used in traditional analog meters.
Tangent galvanometer:-
A tangent galvanometer is an early measuring instrument used for the measurement of electric current. It works by using a compass needle to compare a magnetic field generated by the unknown current to the magnetic field of the Earth. It gets its name from its operating principle, the tangent law of magnetism, which states that the tangent of the angle a compass needle makes is proportional to the ratio of the strengths of the two perpendicular magnetic fields. It was first described by Claude Pouillet in 1837.
A tangent galvanometer consists of a coil of insulated copper wire wound on a circular non-magnetic frame. The frame is mounted vertically on a horizontal base provided with levelling screws. The coil can be rotated on a vertical axis passing through its centre. A compass box is mounted horizontally at the centre of a circular scale. It consists of a tiny, powerful magnetic needle pivoted at the centre of the coil. The magnetic needle is free to rotate in the horizontal plane. The circular scale is divided into four quadrants. Each quadrant is graduated from 0° to 90°. A long thin aluminium pointer is attached to the needle at its centre and at right angle to it. To avoid errors due to parallax a plane mirror is mounted below the compass needle.
In operation, the instrument is first rotated until the magnetic field of the Earth, indicated by the compass needle, is parallel with the plane of the coil. Then the unknown current is applied to the coil. This creates a second magnetic field on the axis of the coil, perpendicular to the Earth's magnetic field. The compass needle responds to the vector sum of the two fields, and deflects to an angle whose tangent is the ratio of the two field strengths. From the angle read from the compass's scale, the current could be found from a table.
The current supply wires have to be wound in a small helix, like a pig's tail, otherwise the field due to the wire will affect the compass needle and an incorrect reading will be obtained.
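The tangent law can be inverted to recover the current from the deflection angle, using the standard field of a circular coil. A minimal sketch; the coil geometry and the horizontal component of the Earth's field used here are illustrative values, not figures from the text:

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
B_EARTH = 2e-5             # horizontal Earth field, tesla (illustrative)

def tangent_current(theta_deg, turns, radius_m):
    """Current from the deflection angle via the tangent law:
    B_coil = B_earth * tan(theta), with B_coil = mu0 * N * I / (2 * r)."""
    b_coil = B_EARTH * math.tan(math.radians(theta_deg))
    return b_coil * 2 * radius_m / (MU0 * turns)
```

At a 45 degree deflection the coil's field exactly equals the Earth's horizontal field, which made 45 degrees a convenient calibration point.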
Uses:-
A major early use for galvanometers was for finding faults in telecommunications cables. They were superseded in this application late in the 20th century by time-domain reflectometers.
Probably the largest use of galvanometers was the D'Arsonval/Weston type movement used in analog meters in electronic equipment. Since the 1980s, galvanometer-type analog meter movements have been displaced by analog to digital converters (ADCs) for some uses. A digital panel meter (DPM) contains an analog to digital converter and numeric display. The advantages of a digital instrument are higher precision and accuracy, but factors such as power consumption or cost may still favor application of analog meter movements.

Sunday, November 15, 2009

Three-phase electric power

Three-phase electric power is a common method of alternating-current electric power transmission. It is a type of polyphase system, and is the most common method used by electric power distribution grids worldwide to distribute power. It is also used to power large motors and other large loads. A three-phase system is generally more economical than others because it uses less conductor material to transmit electric power than equivalent single-phase or two-phase systems at the same voltage.
In a three-phase system, three circuit conductors carry three alternating currents (of the same frequency) which reach their instantaneous peak values at different times. Taking one conductor as the reference, the other two currents are delayed in time by one-third and two-thirds of one cycle of the electrical current. This delay between phases has the effect of giving constant power transfer over each cycle of the current, and also makes it possible to produce a rotating magnetic field in an electric motor. Three-phase systems may or may not have a neutral wire. A neutral wire allows the three-phase system to use a higher voltage while still supporting lower-voltage single-phase appliances. In high-voltage distribution situations, it is common not to have a neutral wire, as the loads can simply be connected between phases (phase-phase connection).
Three-phase has properties that make it very desirable in electric power systems:
The phase currents tend to cancel out one another, summing to zero in the case of a linear balanced load. This makes it possible to eliminate or reduce the size of the neutral conductor; all the phase conductors carry the same current and so can be the same size, for a balanced load. Power transfer into a linear balanced load is constant, which helps to reduce generator and motor vibrations. Three-phase systems can produce a magnetic field that rotates in a specified direction, which simplifies the design of electric motors. Three is the lowest phase order to exhibit all of these properties.

Most household loads are single-phase. In North America and some other countries, three-phase power generally does not enter homes. Even in areas where it does, it is typically split out at the main distribution board and the individual loads are fed from a single phase. Sometimes it is used to power electric stoves and washing machines.

The three phases are typically indicated by colors which vary by country. See the table for more information.
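The two key properties above (phase currents summing to zero, and constant power into a balanced resistive load) can be checked numerically. A small sketch with illustrative peak voltage, peak current, and frequency:

```python
import math

V_PK, I_PK, F = 325.0, 10.0, 50.0   # illustrative peak volts, peak amps, hertz

def phase(t, k):
    """Unit waveform of phase k (k = 0, 1, 2), each delayed by
    one-third of a cycle relative to the previous one."""
    return math.sin(2 * math.pi * F * t - 2 * math.pi * k / 3)

def current_sum(t):
    """Sum of the three phase currents; zero for a balanced linear load,
    which is why the neutral conductor can be reduced or eliminated."""
    return sum(I_PK * phase(t, k) for k in range(3))

def total_power(t):
    """Instantaneous power into a balanced resistive load; constant at
    (3/2) * V_PK * I_PK regardless of t."""
    return sum(V_PK * phase(t, k) * I_PK * phase(t, k) for k in range(3))
```

At every instant `current_sum(t)` is (numerically) zero and `total_power(t)` is the same constant value, which is the "constant power transfer" property mentioned earlier.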

Two-phase electrical power

Two-phase electrical power was an early 20th century polyphase alternating current electric power distribution system. Two circuits were used, with voltage phases differing by 90 degrees. Usually circuits used four wires, two for each phase. Less frequently, three wires were used, with a common return wire of larger diameter. Some early two-phase generators had two complete rotor and field assemblies, with windings physically offset by 90 electrical degrees to provide two-phase power. The generators at Niagara Falls installed in 1895 were the largest generators in the world at the time and were two-phase machines.
The advantage of two-phase electrical power was that it allowed for simple, self-starting electric motors. In the early days of electrical engineering, it was easier to analyze and design two-phase systems where the phases were completely separated.[1] It was not until the invention of the method of symmetrical components in 1918 that polyphase power systems had a convenient mathematical tool for describing unbalanced load cases. The revolving magnetic field produced with a two-phase system allowed electric motors to provide torque from zero motor speed, which was not possible with a single-phase induction motor (without extra starting means). Induction motors designed for two-phase operation use the same winding configuration as capacitor start single-phase motors.
Three-phase electric power requires less conductor mass for the same voltage and overall amount of power, compared with a two-phase four-wire circuit of the same carrying capacity. It has all but replaced two-phase power for commercial distribution of electrical energy, but two-phase circuits are still found in certain control systems.
Two-phase circuits typically use two separate pairs of current-carrying conductors. Alternatively, three wires may be used, but the common conductor carries the vector sum of the phase currents, which requires a larger conductor. Three phase can share conductors so that the three phases can be carried on three conductors of the same size. In electrical power distribution, a requirement of only three conductors rather than four represented a considerable distribution-wire cost savings due to the expense of conductors and installation.
Two-phase power can be derived from a three-phase source using two transformers in a Scott connection. One transformer primary is connected across two phases of the supply. The second transformer is connected to a center-tap of the first transformer, and is wound for 86.6% of the phase-to-phase voltage on the 3-phase system. The secondaries of the transformers will have two phases 90 degrees apart in time, and a balanced two-phase load will be evenly balanced over the three supply phases. Three-wire, 120/240 volt single phase power used in the USA and Canada is sometimes incorrectly called "two-phase". The proper term is split phase or 3-wire single-phase.
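The 86.6% figure is sqrt(3)/2, and it falls directly out of the phasor geometry of the Scott connection. A sketch using unit line-to-neutral phasors (the voltages here are normalized, not taken from any particular system):

```python
import cmath
import math

# Balanced three-phase line-to-neutral phasors, unit magnitude.
va = cmath.rect(1, 0)
vb = cmath.rect(1, math.radians(-120))
vc = cmath.rect(1, math.radians(120))

# Main transformer: connected across phases b and c.
v_main = vb - vc
# Teaser transformer: from phase a to the midpoint (center-tap) of b-c.
v_teaser = va - (vb + vc) / 2

ratio = abs(v_teaser) / abs(v_main)     # sqrt(3)/2, i.e. about 0.866
angle = cmath.phase(v_teaser / v_main)  # the two outputs are 90 degrees apart
```

The teaser winding therefore needs only 86.6% of the main winding's voltage rating, and the two secondary voltages come out equal in magnitude and in quadrature, which is exactly the two-phase relationship.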

Single-phase electric power

In electrical engineering, single-phase electric power refers to the distribution of alternating current electric power using a system in which all the voltages of the supply vary in unison. Single-phase distribution is used when loads are mostly lighting and heating, with few large electric motors. A single-phase supply connected to an alternating current electric motor does not produce a revolving magnetic field; single-phase motors need additional circuits for starting, and such motors are uncommon above 10 or 20 kW in rating.
In contrast, in a three-phase system, the currents in each conductor reach their peak instantaneous values sequentially, not simultaneously; in each cycle of the power frequency, first one, then the second, then the third current reaches its maximum value. The waveforms of the three supply conductors are offset from one another in time (delayed in phase) by one-third of their period. Standard frequencies of single-phase power systems are either 50 or 60 Hz. Special single-phase traction power networks may operate at 16.67 Hz or other frequencies to power electric railways.
Splitting out:-
No arrangement of transformers can convert a single-phase load into a balanced load on a polyphase system. A single-phase load may be powered from a three-phase distribution system either by connection between a phase and neutral or by connecting the load between two phases. The load device must be designed for the voltage in each case. The neutral point in a three phase system exists at the mathematical center of an equilateral triangle formed by the three phase points, and the phase-to-phase voltage is accordingly √3 (about 1.732) times the phase-to-neutral voltage.[1] For example, in places using a 415 volt 3 phase system, the phase-to-neutral voltage is 240 volts, allowing single-phase lighting to be connected phase-to-neutral and three-phase motors to be connected to all three phases.
In North America, a typical three-phase system will have 208 volts between the phases and 120 volts between phase and neutral. If heating equipment designed for the 240-volt three-wire single phase system is connected to two phases of a 208 volt supply, it will only produce 75% of its rated heating effect. Single-phase motors may have taps to allow their use on either 208 V or 240 V supplies.
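The 75% figure follows from resistive heating scaling with the square of the applied voltage: (208/240)² is about 0.75. A one-function sketch of that calculation:

```python
def relative_heating(v_actual, v_rated):
    """Fraction of rated heat output from a resistive load, since
    power dissipated in a fixed resistance scales as voltage squared."""
    return (v_actual / v_rated) ** 2

# A heater rated for 240 V, connected across two phases of a 208 V
# system, delivers only about 75% of its rated output.
fraction = relative_heating(208, 240)
```

The same square law explains why even a modest undervoltage noticeably reduces heating performance.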
On higher voltage systems (kilovolts), where a single-phase transformer is in use to supply a low-voltage system, the method of splitting varies. In North American utility distribution practice, the primary of the step-down transformer is wired across a single high-voltage feed wire and neutral, at least for smaller supplies. Rural distribution may be a single phase at a medium voltage; in some areas single-wire earth return distribution is used when customers are very far apart. In Britain the step-down primary is wired phase-to-phase.
Applications:-

Single-phase power distribution is widely used especially in rural areas, where the cost of a three-phase distribution network is high and motor loads are small and uncommon.
High power systems, say, hundreds of kVA or larger, are nearly always three phase. The largest supply normally available as single phase varies according to the standards of the electrical utility. In the UK a single-phase household supply may be rated 100 A or even 125 A, meaning that there is little need for 3 phase in a domestic or small commercial environment. Much of the rest of Europe has traditionally had much smaller limits on the size of single phase supplies resulting in even houses being supplied with 3 phase (in urban areas with three-phase supply networks).
In North America, individual residences and small commercial buildings with services up to about 100 kV·A (417 amperes at 240 volts) will usually have three-wire single-phase distribution, often with only one customer per distribution transformer. In exceptional cases larger single-phase three-wire services can be provided, usually only in remote areas where polyphase distribution is not available. In rural areas farmers who wish to use three-phase motors may install a phase converter if only a single-phase supply is available. Larger consumers such as large buildings, shopping centres, factories, office blocks, and multiple-unit apartment blocks will have three-phase service. In densely-populated areas of cities, network power distribution is used with many customers and many supply transformers connected to provide hundreds or thousands of kV·A load concentrated over a few hundred square metres.
Three-wire single-phase systems are rarely used in the UK where large loads are needed off only two high voltage phases.
Single-phase power may be used for electric railways; the largest single-phase generator in the world, at Neckarwestheim Nuclear Power Plant, supplies a railway system on a dedicated traction power network.

Electric field

In physics, the space surrounding an electric charge or in the presence of a time-varying magnetic field has a property called an electric field. This electric field exerts a force on other electrically charged objects. The concept of an electric field was introduced by Michael Faraday.
The electric field is a vector field with SI units of newtons per coulomb (N C−1) or, equivalently, volts per metre (V m−1). The SI base units of the electric field are kg·m·s−3·A−1. The strength of the field at a given point is defined as the force that would be exerted on a positive test charge of +1 coulomb placed at that point; the direction of the field is given by the direction of that force. Electric fields contain electrical energy with energy density proportional to the square of the field intensity. The electric field is to charge as gravitational acceleration is to mass and force density is to volume. A moving charge has not just an electric field but also a magnetic field, and in general the electric and magnetic fields are not completely separate phenomena; what one observer perceives as an electric field, another observer in a different frame of reference perceives as a mixture of electric and magnetic fields. For this reason, one speaks of "electromagnetism" or "electromagnetic fields." In quantum mechanics, disturbances in the electromagnetic fields are called photons, and the energy of photons is quantized.
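The defining relation E = F/q and the energy-density statement above can be written out directly. A minimal sketch; the numerical force and charge mentioned afterwards are hypothetical examples, not values from the text:

```python
EPS0 = 8.854e-12  # vacuum permittivity, farads per metre

def field_from_force(force_n, charge_c):
    """Field strength E = F/q, in N/C (equivalently V/m)."""
    return force_n / charge_c

def energy_density(e_field):
    """Energy density of the field, u = (1/2) * eps0 * E^2, in J/m^3,
    i.e. proportional to the square of the field intensity."""
    return 0.5 * EPS0 * e_field ** 2
```

For instance, a force of 2e-5 N on a 1e-6 C test charge implies a field of 20 N/C, and doubling any field strength quadruples the stored energy density.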

Tuesday, November 10, 2009

Wattmeter


The wattmeter is an instrument for measuring the electric power (or the supply rate of electrical energy) in watts of any given circuit.
Electrodynamic:-
The traditional analog wattmeter is an electrodynamic instrument. The device consists of a pair of fixed coils, known as current coils, and a movable coil known as the potential coil.
The current coils are connected in series with the circuit, while the potential coil is connected in parallel. Also, on analog wattmeters, the potential coil carries a needle that moves over a scale to indicate the measurement. A current flowing through the current coil generates an electromagnetic field around the coil. The strength of this field is proportional to the line current and in phase with it. The potential coil has, as a general rule, a high-value resistor connected in series with it to reduce the current that flows through it.
The result of this arrangement is that on a DC circuit, the deflection of the needle is proportional to both the current and the voltage, thus conforming to the equation P = VI. On an AC circuit the deflection is proportional to the average instantaneous product of voltage and current, thus measuring true power, and possibly (depending on load characteristics) showing a different reading from that obtained by simply multiplying the readings shown on a stand-alone voltmeter and a stand-alone ammeter in the same circuit.
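The difference between true power (the averaged instantaneous product) and the simple voltmeter-times-ammeter product can be checked numerically. A sketch assuming sinusoidal voltage and current with an illustrative phase shift:

```python
import math

def average_power(v_rms, i_rms, phase_rad, samples=10000):
    """Numerically average v(t) * i(t) over one cycle; this averaged
    product is what the electrodynamic movement responds to."""
    total = 0.0
    for n in range(samples):
        wt = 2 * math.pi * n / samples
        v = math.sqrt(2) * v_rms * math.sin(wt)
        i = math.sqrt(2) * i_rms * math.sin(wt - phase_rad)
        total += v * i
    return total / samples

# With a 60-degree phase shift between voltage and current, true power
# is only half of the volt-ampere product a voltmeter and ammeter imply.
true_p = average_power(230, 5, math.radians(60))
```

Here the stand-alone meters would suggest 230 x 5 = 1150 VA, while the wattmeter indicates the true power, 1150 x cos(60 degrees) = 575 W.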
The two circuits of a wattmeter can be damaged by excessive current. The ammeter and voltmeter are both vulnerable to overheating — in case of an overload, their pointers will be driven off scale — but in the wattmeter, either or even both the current and potential circuits can overheat without the pointer approaching the end of the scale. This is because the position of the pointer depends on the power factor, voltage and current. Thus, a circuit with a low power factor will give a low reading on the wattmeter, even when both of its circuits are loaded to the maximum safety limit. Therefore, a wattmeter is rated not only in watts, but also in volts and amperes.
Electrodynamometer:-
An early current meter was the electrodynamometer. Used in the early 20th century, the Siemens electrodynamometer, for example, is a form of electrodynamic ammeter that has a fixed coil which is surrounded by another coil having its axis at right angles to that of the fixed coil. This second coil is suspended by a number of silk fibres, and to the coil is also attached a spiral spring, the other end of which is fastened to a torsion head. If the torsion head is then twisted, the suspended coil experiences a torque and is displaced through an angle equal to that of the torsion head. The current can be passed into and out of the movable coil by permitting the ends of the coil to dip into two mercury cups.
If a current is passed through the fixed coil and movable coil in series with one another, the movable coil tends to displace itself so as to bring the axes of the coils, which are normally at right angles, more into the same direction. This tendency can be resisted by giving a twist to the torsion head and so applying to the movable coil through the spring a restoring torque, which opposes the torque due to the dynamic action of the currents. If then the torsion head is provided with an index needle, and also if the movable coil is provided with an indicating point, it is possible to measure the torsional angle through which the head must be twisted to bring the movable coil back to its zero position. In these circumstances, the torsional angle becomes a measure of the torque and therefore of the product of the strengths of the currents in the two coils, that is to say, of the square of the strength of the current passing through the two coils if they are joined up in series. The instrument can therefore be graduated by passing through it known and measured continuous currents, and it then becomes available for use with either continuous or alternating currents. The instrument can be provided with a curve or table showing the current corresponding to each angular displacement of the torsion head.
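Because the same current flows through both coils when they are joined in series, the torque (and hence the balancing twist of the torsion head) goes as the square of the current, which is why the instrument works for both direct and alternating current. A tiny sketch with an arbitrary instrument constant:

```python
def dynamometer_torque(current, k=1.0):
    """Torque on the movable coil with both coils in series: proportional
    to the product of the two coil currents, i.e. the current squared.
    `k` is an arbitrary instrument constant for illustration."""
    return k * current ** 2

# Doubling the current quadruples the torque the torsion head must balance.
ratio = dynamometer_torque(2.0) / dynamometer_torque(1.0)
```

The square law also means the reading is independent of the current's sign, so a calibration made with measured continuous currents remains valid for alternating currents, as the text notes.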


Integrated circuit

In electronics, an integrated circuit (also known as IC, microcircuit, microchip, silicon chip, or chip) is a miniaturized electronic circuit (consisting mainly of semiconductor devices, as well as passive components) that has been manufactured in the surface of a thin substrate of semiconductor material. Integrated circuits are used in almost all electronic equipment in use today and have revolutionized the world of electronics.
A hybrid integrated circuit is a miniaturized electronic circuit constructed of individual semiconductor devices, as well as passive components, bonded to a substrate or circuit board.
Introduction:-
Integrated circuits were made possible by experimental discoveries which showed that semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.
There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography and not constructed one transistor at a time. Furthermore, much less material is used to construct a circuit as a packaged IC die than as a discrete circuit. Performance is high since the components switch quickly and consume little power (compared to their discrete counterparts) because the components are small and close together. As of 2006, chip areas range from a few square millimeters to around 350 mm2, with up to 1 million transistors per mm2.
Invention:-
The idea of an integrated circuit was conceived by a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence, Geoffrey W.A. Dummer (1909-2002), who published it at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on May 7, 1952. He gave many symposia publicly to propagate his ideas.
Dummer unsuccessfully attempted to build such a circuit in 1956.
The integrated circuit can be credited as being invented by both Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor [3] working independently of each other. Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the first working integrated circuit on September 12, 1958. In his patent application of February 6, 1959, Kilby described his new device as “a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated.”
Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit. Robert Noyce also came up with his own idea of the integrated circuit, half a year later than Kilby. Noyce's chip had solved many practical problems that the microchip developed by Kilby had not. Noyce's chip, made at Fairchild, was made of silicon, whereas Kilby's chip was made of germanium.
Early developments of the integrated circuit go back to 1949, when the German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate arranged in a 2-stage amplifier arrangement. Jacobi discloses small and cheap hearing aids as typical industrial applications of his patent. A commercial use of his patent has not been reported.
A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which looked very promising in 1957, was proposed to the US Army by Jack Kilby, and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.
The aforementioned Noyce credited Kurt Lehovec of Sprague Electric for the principle of p-n junction isolation caused by the action of a biased p-n junction (the diode) as a key concept behind the IC.

History of electrical engineering

This article details the history of electrical engineering, covering both general developments and notable individuals within the electrical engineering profession.
History:-
Thales of Miletus, an ancient Greek philosopher writing around 600 BCE, described a form of static electricity, noting that rubbing fur on various substances, such as amber, would cause a particular attraction between the two. He noted that the amber buttons could attract light objects such as hair and that, if the amber was rubbed for long enough, a spark could even be made to jump.
Around 450 BCE Democritus, a later Greek philosopher, developed an atomic theory remarkably similar to the modern one; his mentor, Leucippus, is credited with the same theory. The hypothesis of Leucippus and Democritus held everything to be composed of atoms ("atomos") that were indivisible and indestructible. Democritus presciently stated that empty space lies between atoms and that atoms are constantly in motion. He erred only in holding that atoms come in different sizes and shapes, with each object having its own shaped and sized atom.
An object found in Iraq in 1938, dated to about 250 BCE and called the Baghdad Battery, resembles a galvanic cell and is believed by some to have been used for electroplating in Mesopotamia, although this has not yet been proven.
During the latter part of the 1800s, the study of electricity was largely considered to be a subfield of physics. It was not until the late 19th century that universities started to offer degrees in electrical engineering. In 1882, Darmstadt University of Technology founded the first chair and the first faculty of electrical engineering worldwide. In the same year, under Professor Charles Cross, the Massachusetts Institute of Technology began offering the first option of Electrical Engineering within a physics department. In 1883, Darmstadt University of Technology and Cornell University introduced the world's first courses of study in electrical engineering and in 1885 the University College London founded the first chair of electrical engineering in the United Kingdom. The University of Missouri subsequently established the first department of electrical engineering in the United States in 1886.
During this period work in the area increased dramatically. In 1882 Edison switched on the world's first large-scale electrical supply network, which provided 110 volts direct current to fifty-nine customers in lower Manhattan. In 1887 Nikola Tesla filed a number of patents related to a competing form of power distribution known as alternating current. In the following years a bitter rivalry between Tesla and Edison, known as the "War of Currents", took place over the preferred method of distribution. AC eventually replaced DC for generation and power distribution, enormously extending the range and improving the safety and efficiency of power distribution.
The efforts of the two did much to further electrical engineering: Tesla's work on induction motors and polyphase systems influenced the field for years to come, while Edison's work on telegraphy and his development of the stock ticker proved lucrative for his company, which ultimately became General Electric. However, by the end of the 19th century, other key figures in the progress of electrical engineering were beginning to emerge. Charles Proteus Steinmetz helped foster the development of alternating current that made possible the expansion of the electric power industry in the United States, formulating mathematical theories for engineers.

Monday, November 9, 2009

Hybrid electric vehicle

A hybrid electric vehicle (HEV) combines a conventional internal combustion engine propulsion system with an electric propulsion system. The presence of the electric powertrain is intended to achieve either better fuel economy than a conventional vehicle, or better performance. A variety of types of HEV exist, and the degree to which they function as EVs varies as well. The most common form of HEV is the hybrid electric car, although hybrid electric trucks (pickups and tractors) also exist.
Modern HEVs make use of efficiency-improving technologies such as regenerative braking, which converts the vehicle's kinetic energy into battery-replenishing electric energy, rather than wasting it as heat energy as conventional brakes do. Some varieties of HEVs use their internal combustion engine to generate electricity by spinning an electrical generator (this combination is known as a motor-generator), to either recharge their batteries or to directly power the electric drive motors. Many HEVs reduce idle emissions by shutting down the ICE at idle and restarting it when needed; this is known as a start-stop system. A hybrid-electric produces less emissions from its ICE than a comparably-sized gasoline car, as an HEV's gasoline engine is usually smaller than a pure fossil-fuel vehicle, and if not used to directly drive the car, can be geared to run at maximum efficiency, further improving fuel economy.
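The energy bookkeeping behind regenerative braking can be sketched with the basic kinetic-energy formula E = ½mv². This is an illustrative sketch only; the vehicle mass, speeds, and recovery efficiency below are assumed example values, not figures for any particular HEV.

```python
def recoverable_energy_wh(mass_kg: float, v_start_ms: float, v_end_ms: float,
                          efficiency: float = 0.6) -> float:
    """Kinetic energy released between two speeds (E = 1/2 * m * v^2),
    scaled by an assumed round-trip recovery efficiency, in watt-hours."""
    delta_ke_j = 0.5 * mass_kg * (v_start_ms**2 - v_end_ms**2)
    return delta_ke_j * efficiency / 3600.0  # joules -> watt-hours

# A hypothetical 1500 kg car braking from 100 km/h (~27.8 m/s) to rest,
# assuming 60% of the kinetic energy reaches the battery:
print(round(recoverable_energy_wh(1500, 27.8, 0.0), 1))  # -> 96.6
```

Even at a generous assumed efficiency, each stop recovers only a modest fraction of a traction battery's capacity, which is why regenerative braking pays off most in stop-and-go city driving.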
The hybrid-electric vehicle did not become widely available until the release of the Toyota Prius in Japan in 1997, followed by the Honda Insight in 1999. While initially perceived as unnecessary due to the low cost of gasoline, worldwide increases in the price of petroleum caused many automakers to release hybrids in the late 2000s; they are now perceived as a core segment of the automotive market of the future. Worldwide sales of hybrid vehicles produced by Toyota reached 1.0 million vehicles by May 31, 2007, and the 2.0 million mark was reached by August 31, 2009, with hybrids sold in 50 countries. Worldwide sales are led by the Prius, with cumulative sales of 1.43 million by August 2009. The second-generation Honda Insight was the top-selling vehicle in Japan in April 2009, marking the first occasion that an HEV has received the distinction. American automakers have made development of hybrid cars a top priority.
History:-
In 1901, while employed at Lohner Coach Factory, Ferdinand Porsche designed the Mixte, a 4WD series-hybrid version of the "System Lohner-Porsche" electric carriage that had previously appeared at the 1900 Paris Salon. The Mixte included a pair of generators driven by 2.5-hp Daimler IC engines to extend operating range. It broke several Austrian speed records and won the Exelberg Rally in 1901 with Porsche himself driving. The Mixte used a gasoline engine powering a generator, which in turn powered electric hub motors, with a small battery pack for reliability. It had a range of 50 km, a top speed of 50 km/h, and delivered 5.22 kW of power for 20 minutes.
In 1905, H. Piper filed a US patent application for a hybrid vehicle.
The 1915 Dual Power, made by the Woods Motor Vehicle electric car maker, had a four-cylinder ICE and an electric motor. Below 15 mph (25 km/h) the electric motor alone drove the vehicle, drawing power from a battery pack, and above this speed the "main" engine cut in to take the car up to its 35 mph (55 km/h) top speed. About 600 were made up to 1918.
The first gasoline-electric hybrid car was released by the Woods Motor Vehicle Company of Chicago in 1917. The hybrid was a commercial failure, proving to be too slow for its price, and too difficult to service.
In 1931 Erich Gaichen invented and drove from Altenburg to Berlin a 1/2 horsepower electric car containing features later incorporated into hybrid cars. Its maximum speed was 25 miles per hour (40 km/h), but it was licensed by the Motor Transport Office, taxed by the German Revenue Department and patented by the German Reichs-Patent Amt. The car battery was re-charged by the motor when the car went downhill. Additional power to charge the battery was provided by a cylinder of compressed air, which was re-charged by small air pumps activated by vibrations of the chassis and the brakes, and by igniting oxyhydrogen gas. An account of the car and of Gaichen's characterization as a "crank inventor" can be found in Arthur Koestler's autobiography, Arrow in the Blue, pages 269-271, which summarizes a contemporaneous newspaper account written by Koestler. No production beyond the prototype was reported.
Current technology:-
A more recent working prototype of the HEV was built by Victor Wouk (one of the scientists involved with the Henney Kilowatt, the first transistor-based electric car). Wouk's work with HEVs in the 1960s and 1970s earned him the title of "Godfather of the Hybrid". Wouk installed a prototype hybrid drivetrain (with a 16 kW electric motor) into a 1972 Buick Skylark provided by GM for the 1970 Federal Clean Car Incentive Program, but the program was stopped by the United States Environmental Protection Agency (EPA) in 1976, while Eric Stork, the head of the EPA at the time, was accused of a prejudicial coverup.
The regenerative braking system, the core design concept of most production HEVs, was developed by electrical engineer David Arthurs around 1978 using off-the-shelf components and an Opel GT; the voltage controller linking the batteries, the motor (a jet-engine starter motor), and the DC generator was, however, Arthurs' own design. The vehicle exhibited 75 miles per US gallon (3.1 L/100 km; 90 mpg-imp) fuel efficiency, and plans for it (as well as somewhat updated versions) are still available through the Mother Earth News web site. Mother Earth News' own 1980 version claimed nearly 84 miles per US gallon (2.8 L/100 km; 101 mpg-imp).
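The parenthetical unit conversions above can be checked with the standard definitions (1 mile = 1.609344 km, 1 US gallon = 3.785411784 L). A minimal sketch of the conversion from miles per US gallon to litres per 100 km:

```python
KM_PER_MILE = 1.609344          # exact, by definition
LITRES_PER_US_GAL = 3.785411784 # exact, by definition

def mpg_to_l_per_100km(mpg: float) -> float:
    """Convert miles per US gallon to litres per 100 km."""
    km_per_litre = mpg * KM_PER_MILE / LITRES_PER_US_GAL
    return 100.0 / km_per_litre

# The two figures quoted in the text:
print(round(mpg_to_l_per_100km(75), 1))  # -> 3.1
print(round(mpg_to_l_per_100km(84), 1))  # -> 2.8
```

Both results agree with the article's figures; note that the relationship is reciprocal, so equal steps in mpg do not correspond to equal steps in L/100 km.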
In 1989, Audi produced its first iteration of the Audi Duo (or Audi 100 Avant duo) experimental vehicle, a plug-in parallel hybrid based on the Audi 100 Avant quattro. This car had a 12.6 bhp Siemens electric motor driving the rear wheels, supplied by a trunk-mounted nickel-cadmium battery. The vehicle's front wheels were powered by a 2.3-litre five-cylinder engine with an output of 136 bhp (101 kW). The intent was to produce a vehicle that could operate on the engine in the country and in electric mode in the city; the mode of operation could be selected by the driver. Just ten vehicles are believed to have been made; one drawback was that, due to the extra weight of the electric drive, the vehicles were less efficient when running on their engines alone than standard Audi 100s with the same engine.
Two years later, Audi unveiled the second duo generation, likewise based on the Audi 100 Avant quattro. Once again this featured an electric motor, a 28.6 bhp (21.3 kW) three-phase machine, driving the rear wheels. This time, however, the rear wheels were additionally powered via the Torsen differential from the main engine compartment, which housed a 2.0-litre four-cylinder engine.
The Bill Clinton administration initiated the Partnership for a New Generation of Vehicles (PNGV) program on 29 September 1993, involving Chrysler, Ford, General Motors, USCAR, the DoE, and various other governmental agencies in engineering the next generation of efficient and clean vehicles. The NRC cited automakers' moves to produce HEVs as evidence that technologies developed under PNGV were being rapidly adopted on production lines, as called for under Goal 2. Based on information received from automakers, NRC reviewers questioned whether the "Big Three" would be able to move from the concept phase to cost-effective, pre-production prototype vehicles by 2004, as set out in Goal 3. The program was replaced by the hydrogen-focused FreedomCAR initiative under the George W. Bush administration in 2001, an initiative to fund research too risky for the private sector to engage in, with the long-term goal of developing effectively carbon emission- and petroleum-free vehicles.