Search Engine

Friday, November 27, 2009

Electromagnetic spectrum

The electromagnetic spectrum is the range of all possible frequencies of electromagnetic radiation. The "electromagnetic spectrum" of an object is the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object. The electromagnetic spectrum extends from below the frequencies used for modern radio to gamma radiation at the short-wavelength end, covering wavelengths from thousands of kilometers down to a fraction of the size of an atom. The long-wavelength limit is the size of the universe itself, while the short-wavelength limit is thought to be in the vicinity of the Planck length, although in principle the spectrum is infinite and continuous.
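Frequency, wavelength, and photon energy are related by ν = c/λ and E = hν, which is how any given radiation is placed on the spectrum. As a rough illustration, the short Python sketch below converts a few representative wavelengths into frequencies and photon energies; the example wavelengths are illustrative assumptions, not values from the text.

# Sketch: place example wavelengths on the electromagnetic spectrum
# using nu = c / lambda and E = h * nu. Example wavelengths are illustrative.

C = 2.998e8        # speed of light, m/s
H = 6.626e-34      # Planck constant, J*s
EV = 1.602e-19     # joules per electronvolt

examples = {
    "AM radio (~300 m)": 300.0,
    "microwave (~1 mm)": 1e-3,
    "visible green (~550 nm)": 550e-9,
    "X-ray (~0.1 nm)": 0.1e-9,
}

for name, wavelength_m in examples.items():
    frequency_hz = C / wavelength_m          # nu = c / lambda
    energy_ev = H * frequency_hz / EV        # E = h * nu, expressed in eV
    print(f"{name}: {frequency_hz:.3e} Hz, {energy_ev:.3e} eV")
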
Types:-

While the classification scheme is generally accurate, in reality there is often some overlap between neighboring types of electromagnetic energy. For example, radio waves at 60 Hz may be received and studied by astronomers, or may be ducted along wires as electric power.
The distinction between X-rays and gamma rays is based on their sources: gamma rays are photons generated by nuclear decay or other nuclear and particle processes, whereas X-rays are generated by electronic transitions involving highly energetic inner atomic electrons.
Also, the region of the spectrum occupied by a particular electromagnetic radiation is reference-frame dependent (on account of the Doppler shift for light), so EM radiation that one observer would say is in one region of the spectrum could appear to an observer moving at a substantial fraction of the speed of light with respect to the first to be in another part of the spectrum. For example, consider the cosmic microwave background. It was produced, when matter and radiation decoupled, by the de-excitation of hydrogen atoms to the ground state. These photons were from Lyman series transitions, putting them in the ultraviolet (UV) part of the electromagnetic spectrum. Now this radiation has undergone enough cosmological redshift to put it into the microwave region of the spectrum for observers moving slowly (compared to the speed of light) with respect to the cosmos. However, for particles moving near the speed of light, this radiation is blue-shifted in their rest frame. The highest-energy cosmic ray protons are moving such that, in their rest frame, this radiation appears as high-energy gamma rays, which interact with the proton to produce bound pairs. This interaction is the source of the GZK limit on cosmic ray energies.

Sunday, November 22, 2009

Direct current

Direct current (DC) is the unidirectional flow of electric charge. Direct current is produced by such sources as batteries, thermocouples, solar cells, and commutator-type electric machines of the dynamo type. Direct current may flow in a conductor such as a wire, but can also flow through semiconductors, insulators, or even through a vacuum as in electron or ion beams. The electric charge flows in a constant direction, distinguishing it from alternating current (AC). A term formerly used for direct current was galvanic current. Direct current may be obtained from an alternating current supply by use of a current-switching arrangement called a rectifier, which contains electronic elements (usually) or electromechanical elements (historically) that allow current to flow only in one direction. Direct current may be made into alternating current with an inverter or a motor-generator set.
The first commercial electric power transmission (developed by Thomas Edison in the late nineteenth century) used direct current. Because of the advantages of alternating current over direct current in transformation and transmission, electric power distribution today is nearly all alternating current. For applications requiring direct current, such as third-rail power systems, alternating current is distributed to a substation, which uses a rectifier to convert the power to direct current. See War of Currents. Direct current is used to charge batteries and, in nearly all electronic systems, as the power supply. Very large quantities of direct-current power are used in the production of aluminum and other electrochemical processes. Direct current is used for some railway propulsion, especially in urban areas. High-voltage direct current is used to transmit large amounts of power from remote generation sites or to interconnect alternating current power grids.
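As a rough numerical illustration of rectification, the Python sketch below simulates an ideal full-wave rectifier fed from a sine-wave supply and compares the average (DC) output with the textbook value 2·Vpeak/π; the 170 V peak and 60 Hz figures are illustrative assumptions, not values from the text.

# Sketch: ideal full-wave rectification of a sine wave, assuming a 60 Hz
# supply with 170 V peak (roughly 120 V RMS). Values are illustrative.
import math

PEAK = 170.0          # volts
FREQ = 60.0           # hertz
SAMPLES = 10000

period = 1.0 / FREQ
dt = period / SAMPLES

# ideal full-wave rectifier: output is the absolute value of the input
rectified = [abs(PEAK * math.sin(2 * math.pi * FREQ * n * dt)) for n in range(SAMPLES)]
average_dc = sum(rectified) * dt / period     # mean over one cycle

print(f"average rectified output: {average_dc:.1f} V")
print(f"theoretical 2*Vp/pi:      {2 * PEAK / math.pi:.1f} V")
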
Applications:-
Direct-current installations usually have different types of sockets, switches, and fixtures from those suitable for alternating current, mostly due to the low voltages used. It is usually important with a direct-current appliance not to reverse polarity unless the device has a diode bridge to correct for this (most battery-powered devices do not). DC is commonly found in all low-voltage applications, especially where these are powered by batteries, which can produce only DC, or by solar power systems. Most automotive applications use DC, although the alternator is an AC device that uses a rectifier to produce DC. Most electronic circuits require a DC power supply. Applications using fuel cells (mixing hydrogen and oxygen together with a catalyst to produce electricity and water as byproducts) also produce only DC.
Many telephones connect to a twisted pair of wires, and internally separate the AC component of the voltage between the two wires (the audio signal) from the DC component of the voltage between the two wires (used to power the phone). Telephone exchange communication equipment, such as a DSLAM, uses a standard −48 V DC power supply. The negative polarity is achieved by grounding the positive terminal of the power supply system and the battery bank. This is done to prevent electrolysis deposits. An electrified third rail can be used to power both underground (subway) and overground trains.

Friday, November 20, 2009

Alternating current

In alternating current (AC, also ac), the movement (or flow) of electric charge periodically reverses direction. An electric charge would, for instance, move forward, then backward, then forward, then backward, over and over again. In direct current (DC), the movement (or flow) of electric charge is only in one direction. Used generically, AC refers to the form in which electricity is delivered to businesses and residences. The usual waveform of an AC power circuit is a sine wave; however, in certain applications different waveforms are used, such as triangular or square waves. Audio and radio signals carried on electrical wires are also examples of alternating current. In these applications, an important goal is often the recovery of information encoded (or modulated) onto the AC signal.
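For a sinusoidal waveform, the effective (RMS) value is the peak value divided by √2, which is why a mains supply is normally quoted by its RMS voltage. The short sketch below, assuming an illustrative 325 V peak sine wave (roughly a 230 V RMS supply), checks this numerically.

# Sketch: one cycle of a sine-wave AC voltage and its RMS value.
# A 325 V peak waveform is assumed purely for illustration.
import math

PEAK = 325.0
SAMPLES = 10000

# sample one full cycle of v(t) = Vpeak * sin(2*pi*t/T)
samples = [PEAK * math.sin(2 * math.pi * n / SAMPLES) for n in range(SAMPLES)]
rms = math.sqrt(sum(v * v for v in samples) / SAMPLES)

print(f"RMS from samples: {rms:.1f} V")
print(f"peak / sqrt(2):   {PEAK / math.sqrt(2):.1f} V")
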
History:-
A power transformer developed by Lucien Gaulard and John Dixon Gibbs was demonstrated in London in 1881, and attracted the interest of Westinghouse. They also exhibited the invention in Turin in 1884, where it was adopted for an electric lighting system. Many of their designs were adapted to the particular laws governing electrical distribution in the UK.
In 1882, 1884, and 1885 Gaulard and Gibbs applied for patents on their transformer; however, these were overturned due to prior art of Nikola Tesla and actions initiated by Sebastian Ziani de Ferranti. Ferranti went into this business in 1882 when he set up a shop in London designing various electrical devices. Ferranti believed in the success of alternating current power distribution early on, and was one of the few experts in this system in the UK. In 1887 the London Electric Supply Corporation (LESCo) hired Ferranti to design their power station at Deptford. He designed the building, the generating plant, and the distribution system. On its completion in 1891 it was the first truly modern power station, supplying high-voltage AC power that was then "stepped down" for consumer use on each street. This basic system remains in use today around the world. Many homes all over the world still have electric meters with the Ferranti AC patent stamped on them.
William Stanley, Jr. designed one of the first practical devices to transfer AC power efficiently between isolated circuits. Using pairs of coils wound on a common iron core, his design, called an induction coil, was an early transformer. The AC power system used today developed rapidly after 1886, and includes key concepts by Nikola Tesla, who subsequently sold his patent to George Westinghouse. Lucien Gaulard, John Dixon Gibbs, Carl Wilhelm Siemens, and others contributed subsequently to this field. AC systems overcame the limitations of the direct current system used by Thomas Edison to distribute electricity efficiently over long distances, even though Edison attempted to discredit alternating current as too dangerous during the War of Currents. The first commercial power plant in the United States using three-phase alternating current was the Mill Creek hydroelectric plant near Redlands, California, in 1893, designed by Almirian Decker. Decker's design incorporated 10,000-volt three-phase transmission and established the standards for the complete system of generation, transmission, and motors used today. The Jaruga power plant in Croatia was set in operation on 28 August 1895. It was completed three days after the Niagara Falls plant, becoming the second commercial hydroelectric power plant in the world. The two generators (42 Hz, 550 kW each) and the transformers were produced and installed by the Hungarian company Ganz. The transmission line from the power plant to the City of Šibenik was 11.5 kilometers (7.1 mi) long on wooden towers, and the municipal distribution grid (3000 V/110 V) included six transforming stations. Alternating current circuit theory evolved rapidly in the latter part of the 19th and early 20th century. Notable contributors to the theoretical basis of alternating current calculations include Charles Steinmetz, James Clerk Maxwell, Oliver Heaviside, and many others. Calculations in unbalanced three-phase systems were simplified by the symmetrical components method discussed by Charles Legeyt Fortescue in 1918.

Wednesday, November 18, 2009

Light-emitting diode

A light-emitting diode (LED) (pronounced /ˌɛl.iːˈdiː/[1], or just /lɛd/) is an electronic light source. LEDs are used as indicator lamps in many kinds of electronics and increasingly for lighting. LEDs work by the effect of electroluminescence, discovered by accident in 1907. The LED was introduced as a practical electronic component in 1962. All early devices emitted low-intensity red light, but modern LEDs are available across the visible, ultraviolet, and infrared wavelengths, with very high brightness.
LEDs are based on the semiconductor diode. When the diode is forward biased (switched on), electrons are able to recombine with holes and energy is released in the form of light. This effect is called electroluminescence and the color of the light is determined by the energy gap of the semiconductor. The LED is usually small in area (less than 1 mm2) with integrated optical components to shape its radiation pattern and assist in reflection.
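Since the emitted photon energy is roughly the semiconductor's energy gap, the peak emission wavelength is approximately λ ≈ hc/E_g. The sketch below applies this relation to a few approximate bandgap values; the material/bandgap pairs are illustrative assumptions for the example, not figures from the text.

# Sketch: approximate LED emission wavelength from the semiconductor bandgap,
# using lambda = h*c / E_gap. Bandgap values below are rough, illustrative numbers.
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electronvolt

bandgaps_ev = {
    "GaAs (infrared)": 1.42,
    "GaAsP (red)": 1.9,
    "GaP (green)": 2.26,
    "InGaN (blue)": 2.7,
}

for material, eg in bandgaps_ev.items():
    wavelength_nm = H * C / (eg * EV) * 1e9
    print(f"{material}: ~{wavelength_nm:.0f} nm")
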
LEDs present many advantages over traditional light sources including lower energy consumption, longer lifetime, improved robustness, smaller size and faster switching. However, they are relatively expensive and require more precise current and heat management than traditional light sources.
Applications of LEDs are diverse. They are used as low-energy indicators but also for replacements for traditional light sources in general lighting, automotive lighting and traffic signals. The compact size of LEDs has allowed new text and video displays and sensors to be developed, while their high switching rates are useful in communications technology.
History:-
Electroluminescence was discovered in 1907 by the British experimenter H. J. Round of Marconi Labs, using a crystal of silicon carbide and a cat's-whisker detector. The Russian Oleg Vladimirovich Losev independently reported on the creation of an LED in 1927. His research was distributed in Russian, German, and British scientific journals, but no practical use was made of the discovery for several decades. Rubin Braunstein of the Radio Corporation of America reported on infrared emission from gallium arsenide (GaAs) and other semiconductor alloys in 1955. Braunstein observed infrared emission generated by simple diode structures using gallium antimonide (GaSb), GaAs, indium phosphide (InP), and silicon-germanium (SiGe) alloys at room temperature and at 77 kelvin.
In 1961, experimenters Robert Biard and Gary Pittman, working at Texas Instruments, found that GaAs emitted infrared radiation when electric current was applied, and they received the patent for the infrared LED.
The first practical visible-spectrum (red) LED was developed in 1962 by Nick Holonyak Jr., while working at General Electric Company.[2] Holonyak is seen as the "father of the light-emitting diode". M. George Craford, a former graduate student of Holonyak, invented the first yellow LED and improved the brightness of red and red-orange LEDs by a factor of ten in 1972.[13] In 1976, T. P. Pearsall created the first high-brightness, high-efficiency LEDs for optical fiber telecommunications by inventing new semiconductor materials specifically adapted to optical fiber transmission wavelengths.
Up to 1968, visible and infrared LEDs were extremely costly, on the order of US $200 per unit, and so had little practical application. The Monsanto Company was the first organization to mass-produce visible LEDs, using gallium arsenide phosphide in 1968 to produce red LEDs suitable for indicators. Hewlett-Packard (HP) introduced LEDs in 1968, initially using GaAsP supplied by Monsanto. The technology proved to have major applications for alphanumeric displays and was integrated into HP's early handheld calculators.
The first commercial LEDs were commonly used as replacements for incandescent indicators, and in seven-segment displays, first in expensive equipment such as laboratory and electronics test equipment, then later in such appliances as TVs, radios, telephones, calculators, and even watches (see list of signal applications). These red LEDs were bright enough only for use as indicators, as the light output was not enough to illuminate an area. Later, other colors became widely available and also appeared in appliances and equipment. As LED materials technology became more advanced, the light output was increased while maintaining efficiency and reliability at acceptable levels. The invention and development of the high-power white-light LED led to its use for illumination (see list of illumination applications). Most LEDs were made in the very common 5 mm T1¾ and 3 mm T1 packages, but with increasing power output it has become increasingly necessary to shed excess heat in order to maintain reliability[19], so more complex packages have been adapted for efficient heat dissipation. Packages for state-of-the-art high-power LEDs bear little resemblance to early LEDs.
The first high-brightness blue LED was demonstrated by Shuji Nakamura of Nichia Corporation and was based on InGaN, building on critical developments in GaN nucleation on sapphire substrates and the demonstration of p-type doping of GaN, which were developed by Isamu Akasaki and H. Amano in Nagoya. In 1995, Alberto Barbieri at the Cardiff University Laboratory (GB) investigated the efficiency and reliability of high-brightness LEDs and demonstrated very strong results using a transparent contact made of indium tin oxide (ITO) on an (AlGaInP/GaAs) LED. The existence of blue LEDs and high-efficiency LEDs quickly led to the development of the first white LED, which employed a Y3Al5O12:Ce, or "YAG", phosphor coating to mix yellow (down-converted) light with blue to produce light that appears white. Nakamura was awarded the 2006 Millennium Technology Prize for his invention.
The development of LED technology has caused their efficiency and light output to increase exponentially, with a doubling occurring about every 36 months since the 1960s, in a way similar to Moore's law. The advances are generally attributed to the parallel development of other semiconductor technologies and advances in optics and materials science. This trend is normally called Haitz's law after Dr. Roland Haitz. In February 2008, Bilkent University in Turkey reported a luminous efficacy of 300 lumens of visible light per watt (of radiation, not per electrical watt) and warm light by using nanocrystals.
In January 2009, researchers from Cambridge University reported a process for growing gallium nitride (GaN) LEDs on silicon. Production costs could be reduced by 90% using six-inch silicon wafers instead of two-inch sapphire wafers. The team was led by Colin Humphreys.

Galvanometer

A galvanometer is a type of ammeter: an instrument for detecting and measuring electric current. It is an analog electromechanical transducer that produces a rotary deflection of some type of pointer in response to electric current flowing through its coil. The term has expanded to include uses of the same mechanism in recording, positioning, and servomechanism equipment.
History:-
The deflection of a magnetic compass needle by current in a wire was first described by Hans Oersted in 1820. The phenomenon was studied both for its own sake and as a means of measuring electrical current. The earliest galvanometer was reported by Johann Schweigger at the University of Halle on 16 September 1820. André-Marie Ampère also contributed to its development. Early designs increased the effect of the magnetic field due to the current by using multiple turns of wire; the instruments were at first called "multipliers" due to this common design feature. The term "galvanometer", in common use by 1836, was derived from the surname of Italian electricity researcher Luigi Galvani, who discovered that electric current could make a frog's leg jerk.
Originally the instruments relied on the Earth's magnetic field to provide the restoring force for the compass needle; these were called "tangent" galvanometers and had to be oriented before use. Later instruments of the "astatic" type used opposing magnets to become independent of the Earth's field and would operate in any orientation. The most sensitive form, the Thomson or mirror galvanometer, was invented by William Thomson (Lord Kelvin). Instead of a compass needle, it used tiny magnets attached to a small lightweight mirror, suspended by a thread; the deflection of a beam of light greatly magnified the deflection due to small currents. Alternatively, the deflection of the suspended magnets could be observed directly through a microscope.
The ability to quantitatively measure voltage and current allowed Georg Ohm to formulate Ohm's Law, which states that the voltage across an element is directly proportional to the current through it.
The early moving-magnet form of galvanometer had the disadvantage that it was affected by any magnets or iron masses near it, and its deflection was not linearly proportional to the current. In 1882 Jacques-Arsène d'Arsonval developed a form with a stationary permanent magnet and a moving coil of wire, suspended by coiled hairsprings. The concentrated magnetic field and delicate suspension made these instruments sensitive, and they could be mounted in any position. By 1888 Edward Weston had brought out a commercial form of this instrument, which became a standard component in electrical equipment. This design is almost universally used in moving-coil meters today.
Operation:-
The most familiar use is as an analog measuring instrument, often called a meter. It is used to measure the direct current (flow of electric charge) through an electric circuit. The D'Arsonval/Weston form used today is constructed with a small pivoting coil of wire in the field of a permanent magnet. The coil is attached to a thin pointer that traverses a calibrated scale. A tiny torsion spring pulls the coil and pointer to the zero position.
When a direct current (DC) flows through the coil, the coil generates a magnetic field. This field acts against the permanent magnet. The coil twists, pushing against the spring, and moves the pointer. The pointer indicates the electric current on the scale. Careful design of the pole pieces ensures that the magnetic field is uniform, so that the angular deflection of the pointer is proportional to the current. A useful meter generally contains provision for damping the mechanical resonance of the moving coil and pointer, so that the pointer settles quickly to its position without oscillation.
The basic sensitivity of a meter might be, for instance, 100 microamperes full scale (with a voltage drop of, say, 50 millivolts at full current). Such meters are often calibrated to read some other quantity that can be converted to a current of that magnitude. The use of current dividers, often called shunts, allows a meter to be calibrated to measure larger currents. A meter can be calibrated as a DC voltmeter if the resistance of the coil is known, by calculating the voltage required to generate a full-scale current. A meter can be configured to read other voltages by putting it in a voltage divider circuit; this is generally done by placing a resistor in series with the meter coil. A meter can be used to read resistance by placing it in series with a known voltage (a battery) and an adjustable resistor. In a preparatory step, the circuit is completed and the resistor adjusted to produce full-scale deflection. When an unknown resistor is placed in series in the circuit, the current will be less than full scale and an appropriately calibrated scale can display the value of the previously unknown resistor.
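As a concrete illustration of the calibration arithmetic described above, the sketch below sizes a shunt for a larger current range and a series multiplier for use as a voltmeter, using the 100 µA / 50 mV movement mentioned in the text; the 1 A and 10 V target ranges are illustrative assumptions.

# Sketch: sizing a shunt (for a larger current range) and a series multiplier
# (for voltmeter use) for a moving-coil meter. The 100 uA / 50 mV movement
# comes from the text above; the 1 A and 10 V target ranges are illustrative.

FULL_SCALE_CURRENT = 100e-6      # amperes through the coil at full scale
FULL_SCALE_VOLTAGE = 50e-3       # volts across the coil at full scale
COIL_RESISTANCE = FULL_SCALE_VOLTAGE / FULL_SCALE_CURRENT   # 500 ohms

# Shunt for a 1 A range: the shunt carries everything except the meter current.
target_current = 1.0
shunt = FULL_SCALE_VOLTAGE / (target_current - FULL_SCALE_CURRENT)

# Series multiplier for a 10 V range: total resistance sets full-scale current.
target_voltage = 10.0
multiplier = target_voltage / FULL_SCALE_CURRENT - COIL_RESISTANCE

print(f"coil resistance:      {COIL_RESISTANCE:.0f} ohms")
print(f"shunt for 1 A:        {shunt:.4f} ohms")
print(f"multiplier for 10 V:  {multiplier:.0f} ohms")
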
Because the pointer of the meter is usually a small distance above the scale of the meter, parallax error can occur when the operator attempts to read the scale line that "lines up" with the pointer. To counter this, some meters include a mirror along the markings of the principal scale. The accuracy of the reading from a mirrored scale is improved by positioning one's head while reading the scale so that the pointer and the reflection of the pointer are aligned; at this point, the operator's eye must be directly above the pointer and any parallax error has been minimized.
Types:-
Extremely sensitive measuring equipment once used mirror galvanometers that substituted a mirror for the pointer. A beam of light reflected from the mirror acted as a long, massless pointer. Such instruments were used as receivers for early trans-Atlantic telegraph systems, for instance. The moving beam of light could also be used to make a record on a moving photographic film, producing a graph of current versus time, in a device called an oscillograph.
Today the main type of galvanometer mechanism still used is the moving coil D'Arsonval/Weston mechanism, which is used in traditional analog meters.
Tangent galvanometer:-
A tangent galvanometer is an early measuring instrument used for the measurement of electric current. It works by using a compass needle to compare a magnetic field generated by the unknown current to the magnetic field of the Earth. It gets its name from its operating principle, the tangent law of magnetism, which states that the tangent of the angle a compass needle makes is proportional to the ratio of the strengths of the two perpendicular magnetic fields. It was first described by Claude Pouillet in 1837.
A tangent galvanometer consists of a coil of insulated copper wire wound on a circular non-magnetic frame. The frame is mounted vertically on a horizontal base provided with levelling screws. The coil can be rotated on a vertical axis passing through its centre. A compass box is mounted horizontally at the centre of a circular scale. It consists of a tiny, powerful magnetic needle pivoted at the centre of the coil. The magnetic needle is free to rotate in the horizontal plane. The circular scale is divided into four quadrants. Each quadrant is graduated from 0° to 90°. A long thin aluminium pointer is attached to the needle at its centre and at right angle to it. To avoid errors due to parallax a plane mirror is mounted below the compass needle.
In operation, the instrument is first rotated until the magnetic field of the Earth, indicated by the compass needle, is parallel with the plane of the coil. Then the unknown current is applied to the coil. This creates a second magnetic field on the axis of the coil, perpendicular to the Earth's magnetic field. The compass needle responds to the vector sum of the two fields, and deflects to an angle whose tangent equals the ratio of the two fields. From the angle read from the compass's scale, the current can be found from a table.
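Quantitatively, the field at the centre of a flat coil of N turns and radius r is B = μ0·N·I/(2r), so the tangent law gives I = (2r·B_h/(μ0·N))·tan θ, where B_h is the horizontal component of the Earth's field. The sketch below evaluates this for assumed, illustrative values of the coil geometry and local field.

# Sketch: current implied by a tangent galvanometer reading, using
# B_coil = mu0 * N * I / (2 * r) and B_coil = B_earth_h * tan(theta).
# Coil geometry and the horizontal Earth-field value are illustrative.
import math

MU0 = 4 * math.pi * 1e-7     # vacuum permeability, T*m/A
N_TURNS = 50
RADIUS = 0.10                # coil radius, metres
B_EARTH_H = 2e-5             # horizontal component of Earth's field, tesla

def current_from_deflection(theta_degrees):
    """Return the coil current implied by a compass deflection angle."""
    b_coil = B_EARTH_H * math.tan(math.radians(theta_degrees))
    return 2 * RADIUS * b_coil / (MU0 * N_TURNS)

for angle in (15, 30, 45, 60):
    print(f"{angle:2d} deg -> {current_from_deflection(angle) * 1000:.2f} mA")
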
The current supply wires have to be wound in a small helix, like a pig's tail, otherwise the field due to the wire will affect the compass needle and an incorrect reading will be obtained.
Uses:-
A major early use for galvanometers was for finding faults in telecommunications cables. They were superseded in this application late in the 20th century by time-domain reflectometers.
Probably the largest use of galvanometers was the D'Arsonval/Weston type movement used in analog meters in electronic equipment. Since the 1980s, galvanometer-type analog meter movements have been displaced by analog to digital converters (ADCs) for some uses. A digital panel meter (DPM) contains an analog to digital converter and numeric display. The advantages of a digital instrument are higher precision and accuracy, but factors such as power consumption or cost may still favor application of analog meter movements.

Sunday, November 15, 2009

Three-phase electric power

Three-phase electric power is a common method of alternating-current electric power transmission. It is a type of polyphase system, and is the most common method used by electric power distribution grids worldwide to distribute power. It is also used to power large motors and other large loads. A three-phase system is generally more economical than others because it uses less conductor material to transmit electric power than equivalent single-phase or two-phase systems at the same voltage.
In a three-phase system, three circuit conductors carry three alternating currents (of the same frequency) which reach their instantaneous peak values at different times. Taking one conductor as the reference, the other two currents are delayed in time by one-third and two-thirds of one cycle of the electrical current. This delay between phases has the effect of giving constant power transfer over each cycle of the current, and also makes it possible to produce a rotating magnetic field in an electric motor. Three-phase systems may or may not have a neutral wire. A neutral wire allows the three-phase system to use a higher voltage while still supporting lower-voltage single-phase appliances. In high-voltage distribution situations, it is common not to have a neutral wire, as the loads can simply be connected between phases (phase-phase connection).
Three-phase has properties that make it very desirable in electric power systems:
The phase currents tend to cancel out one another, summing to zero in the case of a linear balanced load. This makes it possible to eliminate or reduce the size of the neutral conductor; all the phase conductors carry the same current and so can be the same size, for a balanced load. Power transfer into a linear balanced load is constant, which helps to reduce generator and motor vibrations. Three-phase systems can produce a magnetic field that rotates in a specified direction, which simplifies the design of electric motors. Three is the lowest phase order to exhibit all of these properties. Most household loads are single-phase. In North America and some other countries, three-phase power generally does not enter homes. Even in areas where it does, it is typically split out at the main distribution board and the individual loads are fed from a single phase. Sometimes it is used to power electric stoves and washing machines. The three phases are typically indicated by colors which vary by country.
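Both of the first two properties, currents summing to zero and constant power into a balanced load, can be verified numerically. The sketch below does so for an assumed balanced resistive load; the amplitude, frequency, and resistance values are illustrative.

# Sketch: verify that balanced three-phase currents sum to zero and that the
# total instantaneous power into a balanced resistive load is constant.
# Amplitude, frequency, and load resistance are illustrative.
import math

V_PEAK = 325.0
FREQ = 50.0
R_LOAD = 10.0
SAMPLES = 1000

for n in range(0, SAMPLES, 250):          # spot-check a few instants in a cycle
    t = n / (SAMPLES * FREQ)
    phases = [V_PEAK * math.sin(2 * math.pi * FREQ * t - k * 2 * math.pi / 3)
              for k in range(3)]
    currents = [v / R_LOAD for v in phases]
    total_power = sum(v * i for v, i in zip(phases, currents))
    print(f"sum of currents: {sum(currents):+.6f} A, "
          f"total power: {total_power:.1f} W")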

Two-phase electrical power

Two-phase electrical power was an early 20th-century polyphase alternating current electric power distribution system. Two circuits were used, with voltage phases differing by 90 degrees. Usually circuits used four wires, two for each phase. Less frequently, three wires were used, with the common wire having a larger-diameter conductor. Some early two-phase generators had two complete rotor and field assemblies, with windings physically offset by 90 electrical degrees to provide two-phase power. The generators at Niagara Falls installed in 1895 were the largest generators in the world at the time and were two-phase machines.
The advantage of two-phase electrical power was that it allowed for simple, self-starting electric motors. In the early days of electrical engineering, it was easier to analyze and design two-phase systems where the phases were completely separated. [1] It was not until the invention of the method of symmetrical components in 1918 that polyphase power systems had a convenient mathematical tool for describing unbalanced load cases. The revolving magnetic field produced with a two-phase system allowed electric motors to provide torque from zero motor speed, which was not possible with a single-phase induction motor (without extra starting means). Induction motors designed for two-phase operation use the same winding configuration as capacitor start single-phase motors.
Three-phase electric power requires less conductor mass for the same voltage and overall amount of power, compared with a two-phase four-wire circuit of the same carrying capacity. It has all but replaced two-phase power for commercial distribution of electrical energy, but two-phase circuits are still found in certain control systems.
Two-phase circuits typically use two separate pairs of current-carrying conductors. Alternatively, three wires may be used, but the common conductor carries the vector sum of the phase currents, which requires a larger conductor. Three phase can share conductors so that the three phases can be carried on three conductors of the same size. In electrical power distribution, a requirement of only three conductors rather than four represented a considerable distribution-wire cost savings due to the expense of conductors and installation.
Two-phase power can be derived from a three-phase source using two transformers in a Scott connection. One transformer primary is connected across two phases of the supply. The second transformer is connected to a center tap of the first transformer, and is wound for 86.6% of the phase-to-phase voltage on the three-phase system. The secondaries of the transformers will have two phases 90 degrees apart in time, and a balanced two-phase load will be evenly balanced over the three supply phases. Three-wire, 120/240 volt single-phase power used in the USA and Canada is sometimes incorrectly called "two-phase". The proper term is split-phase or three-wire single-phase.

Single-phase electric power

In electrical engineering, single-phase electric power refers to the distribution of alternating current electric power using a system in which all the voltages of the supply vary in unison. Single-phase distribution is used when loads are mostly lighting and heating, with few large electric motors. A single-phase supply connected to an alternating current electric motor does not produce a revolving magnetic field; single-phase motors need additional circuits for starting, and such motors are uncommon above 10 or 20 kW in rating.
In contrast, in a three-phase system, the currents in each conductor reach their peak instantaneous values sequentially, not simultaneously; in each cycle of the power frequency, first one, then the second, then the third current reaches its maximum value. The waveforms of the three supply conductors are offset from one another in time (delayed in phase) by one-third of their period. Standard frequencies of single-phase power systems are either 50 or 60 Hz. Special single-phase traction power networks may operate at 16.67 Hz or other frequencies to power electric railways.
Splitting out:-
No arrangement of transformers can convert a single-phase load into a balanced load on a polyphase system. A single-phase load may be powered from a three-phase distribution system either by connection between a phase and neutral or by connecting the load between two phases. The load device must be designed for the voltage in each case. The neutral point in a three-phase system exists at the mathematical center of an equilateral triangle formed by the three phase points, and the phase-to-phase voltage is accordingly √3 times the phase-to-neutral voltage.[1] For example, in places using a 415 volt three-phase system, the phase-to-neutral voltage is 240 volts, allowing single-phase lighting to be connected phase-to-neutral and three-phase motors to be connected to all three phases.
In North America, a typical three-phase system will have 208 volts between the phases and 120 volts between phase and neutral. If heating equipment designed for the 240-volt three-wire single phase system is connected to two phases of a 208 volt supply, it will only produce 75% of its rated heating effect. Single-phase motors may have taps to allow their use on either 208 V or 240 V supplies.
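The figures quoted in the two paragraphs above follow from the √3 relation and from heating power scaling with the square of the applied voltage for a resistive load, as this quick check shows.

# Sketch: the voltage relations quoted above.
# Phase-to-neutral = phase-to-phase / sqrt(3); heating power scales as V^2.
import math

print(415 / math.sqrt(3))        # ~240 V phase-to-neutral on a 415 V system
print(208 / math.sqrt(3))        # ~120 V phase-to-neutral on a 208 V system

# A resistive heater rated for 240 V, run on a 208 V supply:
print((208 / 240) ** 2)          # ~0.75 -> about 75% of rated heat output
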
On higher voltage systems (kilovolts), where a single-phase transformer is in use to supply a low-voltage system, the method of splitting varies. In North American utility distribution practice, the primary of the step-down transformer is wired across a single high-voltage feed wire and neutral, at least for smaller supplies. Rural distribution may be a single phase at a medium voltage; in some areas single-wire earth return distribution is used when customers are very far apart. In Britain the step-down primary is wired phase-phase.
Applications:-

Single-phase power distribution is widely used especially in rural areas, where the cost of a three-phase distribution network is high and motor loads are small and uncommon.
High power systems, say, hundreds of kVA or larger, are nearly always three phase. The largest supply normally available as single phase varies according to the standards of the electrical utility. In the UK a single-phase household supply may be rated 100 A or even 125 A, meaning that there is little need for 3 phase in a domestic or small commercial environment. Much of the rest of Europe has traditionally had much smaller limits on the size of single phase supplies resulting in even houses being supplied with 3 phase (in urban areas with three-phase supply networks).
In North America, individual residences and small commercial buildings with services up to about 100 kV·A (417 amperes at 240 volts) will usually have three-wire single-phase distribution, often with only one customer per distribution transformer. In exceptional cases larger single-phase three-wire services can be provided, usually only in remote areas where polyphase distribution is not available. In rural areas farmers who wish to use three-phase motors may install a phase converter if only a single-phase supply is available. Larger consumers such as large buildings, shopping centres, factories, office blocks, and multiple-unit apartment blocks will have three-phase service. In densely-populated areas of cities, network power distribution is used with many customers and many supply transformers connected to provide hundreds or thousands of kV·A load concentrated over a few hundred square metres.
Three-wire single-phase systems are rarely used in the UK where large loads are needed off only two high voltage phases.
Single-phase power may be used for electric railways; the largest single-phase generator in the world, at Neckarwestheim Nuclear Power Plant, supplies a railway system on a dedicated traction power network.

Electric field

In physics, the space surrounding an electric charge or in the presence of a time-varying magnetic field has a property called an electric field. This electric field exerts a force on other electrically charged objects. The concept of an electric field was introduced by Michael Faraday.
The electric field is a vector field with SI units of newtons per coulomb (N·C⁻¹) or, equivalently, volts per metre (V·m⁻¹). The SI base units of the electric field are kg·m·s⁻³·A⁻¹. The strength of the field at a given point is defined as the force that would be exerted on a positive test charge of +1 coulomb placed at that point; the direction of the field is given by the direction of that force. Electric fields contain electrical energy with energy density proportional to the square of the field intensity. The electric field is to charge as gravitational acceleration is to mass and force density is to volume. A moving charge has not just an electric field but also a magnetic field, and in general the electric and magnetic fields are not completely separate phenomena; what one observer perceives as an electric field, another observer in a different frame of reference perceives as a mixture of electric and magnetic fields. For this reason, one speaks of "electromagnetism" or "electromagnetic fields." In quantum mechanics, disturbances in the electromagnetic fields are called photons, and the energy of photons is quantized.
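Because the field is defined as the force per unit positive test charge, the force on any charge q placed in a field E is F = qE, a vector along the field direction for positive q. A minimal sketch, with an assumed uniform field and test charge:

# Sketch: force on a point charge in a uniform electric field, F = q * E.
# The field strength and charge below are illustrative values.

E_FIELD = (0.0, 500.0, 0.0)      # volts per metre (equivalently N/C)
CHARGE = 2e-6                     # coulombs

force = tuple(CHARGE * component for component in E_FIELD)   # newtons
print(force)                      # about 1 mN along +y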

Tuesday, November 10, 2009

Wattmeter


The wattmeter is an instrument for measuring the electric power (or the supply rate of electrical energy) in watts of any given circuit.
Electrodynamic:-
The traditional analog wattmeter is an electrodynamic instrument. The device consists of a pair of fixed coils, known as current coils, and a movable coil known as the potential coil.
The current coils are connected in series with the circuit, while the potential coil is connected in parallel. Also, on analog wattmeters, the potential coil carries a needle that moves over a scale to indicate the measurement. A current flowing through the current coil generates an electromagnetic field around the coil. The strength of this field is proportional to the line current and in phase with it. The potential coil has, as a general rule, a high-value resistor connected in series with it to reduce the current that flows through it.
The result of this arrangement is that on a DC circuit, the deflection of the needle is proportional to both the current and the voltage, thus conforming to the equation W = VA or P = VI. On an AC circuit the deflection is proportional to the average instantaneous product of voltage and current, thus measuring true power, and possibly (depending on load characteristics) showing a different reading from that obtained by simply multiplying the readings of a stand-alone voltmeter and a stand-alone ammeter in the same circuit.
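The gap between the wattmeter reading (the average of the instantaneous v·i product) and the simple product of separate voltmeter and ammeter readings is the power factor. The sketch below illustrates this for an assumed sinusoidal circuit in which the current lags the voltage by 60 degrees; all values are illustrative.

# Sketch: true power (average of instantaneous v*i, what a wattmeter reads)
# versus apparent power (Vrms * Irms, what separate meters suggest).
# A 60-degree current lag is assumed purely for illustration.
import math

V_PEAK = 325.0
I_PEAK = 10.0
PHASE = math.radians(60)
SAMPLES = 10000

p_avg = sum(V_PEAK * math.sin(2 * math.pi * n / SAMPLES) *
            I_PEAK * math.sin(2 * math.pi * n / SAMPLES - PHASE)
            for n in range(SAMPLES)) / SAMPLES

v_rms = V_PEAK / math.sqrt(2)
i_rms = I_PEAK / math.sqrt(2)

print(f"true power (wattmeter): {p_avg:.0f} W")
print(f"apparent power (V*I):   {v_rms * i_rms:.0f} VA")
print(f"power factor:           {p_avg / (v_rms * i_rms):.2f}")   # ~cos(60 deg) = 0.5
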
The two circuits of a wattmeter can be damaged by excessive current. The ammeter and voltmeter are both vulnerable to overheating — in case of an overload, their pointers will be driven off scale — but in the wattmeter, either or even both the current and potential circuits can overheat without the pointer approaching the end of the scale! This is because the position of the pointer depends on the power factor, voltage and current. Thus, a circuit with a low power factor will give a low reading on the wattmeter, even when both of its circuits are loaded to the maximum safety limit. Therefore, a wattmeter is rated not only in watts, but also in volts and amperes.
Electrodynamometer:-
An early current meter was the electrodynamometer. Used in the early 20th century, the Siemens electrodynamometer, for example, is a form of electrodynamic ammeter that has a fixed coil which is surrounded by another coil having its axis at right angles to that of the fixed coil. This second coil is suspended by a number of silk fibres, and to the coil is also attached a spiral spring, the other end of which is fastened to a torsion head. If the torsion head is twisted, the suspended coil experiences a torque and is displaced through an angle equal to that of the torsion head. The current can be passed into and out of the movable coil by permitting the ends of the coil to dip into two mercury cups.
If a current is passed through the fixed coil and movable coil in series with one another, the movable coil tends to displace itself so as to bring the axes of the coils, which are normally at right angles, more into the same direction. This tendency can be resisted by giving a twist to the torsion head and so applying to the movable coil through the spring a restoring torque, which opposes the torque due to the dynamic action of the currents. If then the torsion head is provided with an index needle, and also if the movable coil is provided with an indicating point, it is possible to measure the torsional angle through which the head must be twisted to bring the movable coil back to its zero position. In these circumstances, the torsional angle becomes a measure of the torque and therefore of the product of the strengths of the currents in the two coils, that is to say, of the square of the strength of the current passing through the two coils if they are joined up in series. The instrument can therefore be graduated by passing through it known and measured continuous currents, and it then becomes available for use with either continuous or alternating currents. The instrument can be provided with a curve or table showing the current corresponding to each angular displacement of the torsion head.


Integrated circuit

In electronics, an integrated circuit (also known as IC, microcircuit, microchip, silicon chip, or chip) is a miniaturized electronic circuit (consisting mainly of semiconductor devices, as well as passive components) that has been manufactured in the surface of a thin substrate of semiconductor material. Integrated circuits are used in almost all electronic equipment in use today and have revolutionized the world of electronics.
A hybrid integrated circuit is a miniaturized electronic circuit constructed of individual semiconductor devices, as well as passive components, bonded to a substrate or circuit board.
Introduction:-
Integrated circuits were made possible by experimental discoveries which showed that semiconductor devices could perform the functions of vacuum tubes, and by mid-20th-century technology advancements in semiconductor device fabrication. The integration of large numbers of tiny transistors into a small chip was an enormous improvement over the manual assembly of circuits using discrete electronic components. The integrated circuit's mass production capability, reliability, and building-block approach to circuit design ensured the rapid adoption of standardized ICs in place of designs using discrete transistors.
There are two main advantages of ICs over discrete circuits: cost and performance. Cost is low because the chips, with all their components, are printed as a unit by photolithography and not constructed one transistor at a time. Furthermore, much less material is used to construct a circuit as a packaged IC die than as a discrete circuit. Performance is high since the components switch quickly and consume little power (compared to their discrete counterparts) because the components are small and close together. As of 2006, chip areas range from a few square millimeters to around 350 mm², with up to 1 million transistors per mm².
Invention:-
The idea of an integrated circuit was conceived by a radar scientist working for the Royal Radar Establishment of the British Ministry of Defence, Geoffrey W.A. Dummer (1909-2002), who published it at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on May 7, 1952. He gave many symposia publicly to propagate his ideas.
Dummer unsuccessfully attempted to build such a circuit in 1956.
The integrated circuit can be credited as being invented by both Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor [3] working independently of each other. Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the first working integrated circuit on September 12, 1958. In his patent application of February 6, 1959, Kilby described his new device as “a body of semiconductor material ... wherein all the components of the electronic circuit are completely integrated.”
Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit. Robert Noyce also came up with his own idea of an integrated circuit, half a year later than Kilby. Noyce's chip solved many practical problems that the microchip developed by Kilby had not. Noyce's chip, made at Fairchild, was made of silicon, whereas Kilby's chip was made of germanium.
Early developments of the integrated circuit go back to 1949, when the German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device showing five transistors on a common substrate arranged in a 2-stage amplifier arrangement. Jacobi discloses small and cheap hearing aids as typical industrial applications of his patent. A commercial use of his patent has not been reported.
A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which looked very promising in 1957, was proposed to the US Army by Jack Kilby, and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy). However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.
The aforementioned Noyce credited Kurt Lehovec of Sprague Electric for the principle of p-n junction isolation caused by the action of a biased p-n junction (the diode) as a key concept behind the IC.

History of electrical engineering

This article details the history of electrical engineering. It also covers general developments and notable individuals within the electrical engineering profession.
History:-
Thales of Miletus, an ancient Greek philosopher writing around 600 BCE, described a form of static electricity, noting that rubbing fur on various substances, such as amber, would cause a particular attraction between the two. He noted that amber buttons could attract light objects such as hair, and that if the amber was rubbed for long enough a spark could even be made to jump. At around 450 BCE Democritus, a later Greek philosopher, developed an atomic theory that was remarkably similar to our modern atomic theory. His mentor, Leucippus, is credited with this same theory. The hypothesis of Leucippus and Democritus held everything to be composed of atoms. These atoms, called "atomos", were indivisible and indestructible. He presciently stated that between atoms lies empty space, and that atoms are constantly in motion. He was incorrect only in stating that atoms come in different sizes and shapes, with each object having its own shape and size of atom.
An object found in Iraq in 1938, dated to about 250 BCE and called the Baghdad Battery, resembles a galvanic cell and is believed by some to have been used for electroplating in Mesopotamia, although this has not yet been proven.
During the latter part of the 1800s, the study of electricity was largely considered to be a subfield of physics. It was not until the late 19th century that universities started to offer degrees in electrical engineering. In 1882, Darmstadt University of Technology founded the first chair and the first faculty of electrical engineering worldwide. In the same year, under Professor Charles Cross, the Massachusetts Institute of Technology began offering the first option of Electrical Engineering within a physics department. In 1883, Darmstadt University of Technology and Cornell University introduced the world's first courses of study in electrical engineering and in 1885 the University College London founded the first chair of electrical engineering in the United Kingdom. The University of Missouri subsequently established the first department of electrical engineering in the United States in 1886.
During this period work in the area increased dramatically. In 1882 Edison switched on the world's first large-scale electrical supply network, which provided 110 volts direct current to fifty-nine customers in lower Manhattan. In 1887 Nikola Tesla filed a number of patents related to a competing form of power distribution known as alternating current. In the following years a bitter rivalry between Tesla and Edison, known as the "War of Currents", took place over the preferred method of distribution. AC eventually replaced DC for generation and power distribution, enormously extending the range and improving the safety and efficiency of power distribution. The efforts of the two did much to further electrical engineering: Tesla's work on induction motors and polyphase systems influenced the field for years to come, while Edison's work on telegraphy and his development of the stock ticker proved lucrative for his company, which ultimately became General Electric. However, by the end of the 19th century, other key figures in the progress of electrical engineering were beginning to emerge. Charles Proteus Steinmetz helped foster the development of alternating current that made possible the expansion of the electric power industry in the United States, formulating mathematical theories for engineers.

Monday, November 9, 2009

Hybrid electric vehicle

A hybrid electric vehicle (HEV) combines a conventional internal combustion engine propulsion system with an electric propulsion system. The presence of the electric powertrain is intended to achieve either better fuel economy than a conventional vehicle, or better performance. A variety of types of HEV exist, and the degree to which they function as EVs varies as well. The most common form of HEV is the hybrid electric car, although hybrid electric trucks (pickups and tractors) also exist.
Modern HEVs make use of efficiency-improving technologies such as regenerative braking, which converts the vehicle's kinetic energy into battery-replenishing electric energy, rather than wasting it as heat energy as conventional brakes do. Some varieties of HEVs use their internal combustion engine to generate electricity by spinning an electrical generator (this combination is known as a motor-generator), either to recharge their batteries or to directly power the electric drive motors. Many HEVs reduce idle emissions by shutting down the ICE at idle and restarting it when needed; this is known as a start-stop system. A hybrid-electric produces lower emissions from its ICE than a comparably sized gasoline car, as an HEV's gasoline engine is usually smaller than that of a pure fossil-fuel vehicle and, if not used to directly drive the car, can be geared to run at maximum efficiency, further improving fuel economy.
The hybrid-electric vehicle did not become widely available until the release of the Toyota Prius in Japan in 1997, followed by the Honda Insight in 1999. While initially perceived as unnecessary due to the low cost of gasoline, worldwide increases in the price of petroleum caused many automakers to release hybrids in the late 2000s; they are now perceived as a core segment of the automotive market of the future. Worldwide sales of hybrid vehicles produced by Toyota reached 1.0 million vehicles by May 31, 2007, and the 2.0 million mark was reached by August 31, 2009, with hybrids sold in 50 countries. Worldwide sales are led by the Prius, with cumulative sales of 1.43 million by August 2009. The second-generation Honda Insight was the top-selling vehicle in Japan in April 2009, marking the first occasion that an HEV has received the distinction. American automakers have made development of hybrid cars a top priority.
History:-
In 1901, while employed at Lohner Coach Factory, Ferdinand Porsche designed the Mixte, a 4WD series-hybrid version of the "System Lohner-Porsche" electric carriage that had previously appeared at the 1900 Paris Salon. The Mixte included a pair of generators driven by 2.5-hp Daimler IC engines to extend operating range. The Mixte broke several Austrian speed records, and also won the Exelberg Rally in 1901 with Porsche himself driving. The Mixte used a gasoline engine powering a generator, which in turn powered electric hub motors, with a small battery pack for reliability. It had a range of 50 km, a top speed of 50 km/h, and a power of 5.22 kW during 20 minutes.
In 1905, H. Piper filed a US patent application for a hybrid vehicle.
The 1915 Dual Power, made by the Woods Motor Vehicle electric car maker, had a four-cylinder ICE and an electric motor. Below 15 mph (25 km/h) the electric motor alone drove the vehicle, drawing power from a battery pack, and above this speed the "main" engine cut in to take the car up to its 35 mph (55 km/h) top speed. About 600 were made up to 1918.
The first gasoline-electric hybrid car was released by the Woods Motor Vehicle Company of Chicago in 1917. The hybrid was a commercial failure, proving to be too slow for its price, and too difficult to service.
In 1931 Erich Gaichen invented and drove from Altenburg to Berlin a 1/2 horsepower electric car containing features later incorporated into hybrid cars. Its maximum speed was 25 miles per hour (40 km/h), but it was licensed by the Motor Transport Office, taxed by the German Revenue Department and patented by the German Reichs-Patent Amt. The car battery was re-charged by the motor when the car went downhill. Additional power to charge the battery was provided by a cylinder of compressed air which was re-charged by small air pumps activated by vibrations of the chassis and the brakes and by igniting oxyhydrogen gas. An account of the car and his characterization as a "crank inventor" can be found in Arthur Koestler's autobiography, Arrow in the Blue, pages 269-271, which summarize a contemporaneous newspaper account written by Koestler. No production beyond the prototype was reported.
Current technology:-
A more recent working prototype of the HEV was built by Victor Wouk (one of the scientists involved with the Henney Kilowatt, the first transistor-based electric car). Wouk's work with HEVs in the 1960s and 1970s earned him the title of "Godfather of the Hybrid". Wouk installed a prototype hybrid drivetrain (with a 16 kW electric motor) into a 1972 Buick Skylark provided by GM for the 1970 Federal Clean Car Incentive Program, but the program was stopped by the United States Environmental Protection Agency (EPA) in 1976 while Eric Stork, the head of the EPA at the time, was accused of a prejudicial coverup.
The regenerative braking system, the core design concept of most production HEVs, was developed by electrical engineer David Arthurs around 1978 using off-the-shelf components and an Opel GT. However, the voltage controller to link the batteries, motor (a jet-engine starter motor), and DC generator was Arthurs' own. The vehicle exhibited 75 miles per US gallon (3.1 L/100 km; 90 mpg-imp) fuel efficiency, and plans for it (as well as somewhat updated versions) are still available through the Mother Earth News web site. The Mother Earth News' own 1980 version claimed nearly 84 miles per US gallon (2.8 L/100 km; 101 mpg-imp).
In 1989, Audi produced its first iteration of the Audi Duo (or Audi 100 Avant duo) experimental vehicle, a plug-in parallel hybrid based on the Audi 100 Avant quattro. The car's rear wheels were driven by a 12.6 bhp Siemens electric motor fed from a trunk-mounted nickel-cadmium battery, while the front wheels were powered by a 2.3-litre five-cylinder engine with an output of 136 bhp (101 kW). The intent was to produce a vehicle that could operate on the engine in the country and in electric mode in the city; the mode of operation could be selected by the driver. Just ten vehicles are believed to have been made; one drawback was that, due to the extra weight of the electric drive, the vehicles were less efficient when running on their engines alone than standard Audi 100s with the same engine.
Two years later, Audi unveiled the second Duo generation, likewise based on the Audi 100 Avant quattro. Once again this featured an electric motor, a 28.6 bhp (21.3 kW) three-phase machine, driving the rear wheels. This time, however, the rear wheels were additionally powered via the Torsen differential from the main engine compartment, which housed a 2.0-litre four-cylinder engine.
The Bill Clinton administration initiated the Partnership for a New Generation of Vehicles (PNGV) program on 29 September 1993, involving Chrysler, Ford, General Motors, USCAR, the DoE, and various other governmental agencies, to engineer the next generation of efficient and clean vehicles. The NRC cited automakers' moves to produce HEVs as evidence that technologies developed under PNGV were being rapidly adopted on production lines, as called for under Goal 2. Based on information received from automakers, NRC reviewers questioned whether the "Big Three" would be able to move from the concept phase to cost-effective, pre-production prototype vehicles by 2004, as set out in Goal 3. The program was replaced in 2001 by the George W. Bush administration's hydrogen-focused FreedomCAR initiative, intended to fund research too risky for the private sector, with the long-term goal of developing effectively carbon-emission- and petroleum-free vehicles.

Capacitance

In electromagnetism and electronics, capacitance is the ability of a body to hold an electrical charge. Capacitance is also a measure of the amount of electric charge stored (or separated) for a given electric potential. A common form of charge-storage device is a parallel-plate capacitor. In a parallel-plate capacitor, capacitance is directly proportional to the surface area of the conductor plates and inversely proportional to the separation distance between the plates. If the charges on the plates are +Q and −Q, and V is the voltage between the plates, then the capacitance is given by:

C = Q/V
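As a quick illustration of these relationships, the sketch below computes the capacitance of a parallel-plate capacitor from its geometry and the charge stored at a given voltage. It is a minimal example assuming the standard formulas C = ε₀εᵣA/d and Q = CV; the function names and numbers are illustrative, not taken from the text.

```python
# Minimal sketch: parallel-plate capacitance and stored charge.
# Function names and example values are illustrative.

EPSILON_0 = 8.854e-12  # vacuum permittivity, in farads per metre

def parallel_plate_capacitance(area_m2, separation_m, relative_permittivity=1.0):
    """C = eps0 * eps_r * A / d: proportional to plate area, inverse to plate separation."""
    return EPSILON_0 * relative_permittivity * area_m2 / separation_m

def stored_charge(capacitance_f, voltage_v):
    """Q = C * V, i.e. the C = Q/V relation above rearranged."""
    return capacitance_f * voltage_v

C = parallel_plate_capacitance(area_m2=0.01, separation_m=1e-4)  # 10 cm x 10 cm plates, 0.1 mm gap
Q = stored_charge(C, voltage_v=5.0)
print(f"C = {C:.3e} F, Q at 5 V = {Q:.3e} C")
```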


Sunday, November 8, 2009

Asynchronous circuit

An asynchronous circuit is a circuit in which the parts are largely autonomous. They are not governed by a clock circuit or global clock signal, but instead need only wait for the signals that indicate completion of instructions and operations. These signals are specified by simple data transfer protocols. This digital logic design is contrasted with a synchronous circuit which operates according to clock timing signals.
Theoretical foundations:-
Petri nets are an attractive and powerful model for reasoning about asynchronous circuits, although they have been criticized by Carl Hewitt for their lack of physical realism (see Petri net). Subsequent to Petri nets, other models of concurrency that can describe asynchronous circuits have been developed, including the Actor model and process calculi.
The term asynchronous logic is used to describe a variety of design styles, which use different assumptions about circuit properties. These vary from the bundled delay model - which uses 'conventional' data processing elements with completion indicated by a locally generated delay model - to delay-insensitive design - where arbitrary delays through circuit elements can be accommodated. The latter style tends to yield circuits which are larger than bundled data implementations, but which are insensitive to layout and parametric variations and are thus "correct by design."
Benefits:-
Different classes of asynchronous circuitry offer different advantages. Below is a list of the advantages offered by Quasi Delay Insensitive Circuits, generally agreed to be the most "pure" form of asynchronous logic that retains computational universality. Less pure forms of asynchronous circuitry offer better performance at the cost of compromising one or more of these advantages:
- Robust handling of metastability in arbiters.
- Early completion of a circuit when it is known that the inputs which have not yet arrived are irrelevant.
- Possibly lower power consumption, because no transistor ever transitions unless it is performing useful computation (clock gating in synchronous designs is an imperfect approximation of this ideal). Clock drivers can also be removed, which can significantly reduce power consumption. However, when using certain encodings, asynchronous circuits may require more area, which can result in increased power consumption if the underlying process has poor leakage properties (for example, deep submicrometer processes used prior to the introduction of high-K dielectrics).
- Freedom from the ever-worsening difficulties of distributing a high-fanout, timing-sensitive clock signal.
- Better modularity and composability.
- Far fewer assumptions about the manufacturing process are required (most assumptions are timing assumptions).
- Circuit speed adapts on the fly to changing temperature and voltage conditions rather than being locked at the speed mandated by worst-case assumptions.
- Immunity to transistor-to-transistor variability in the manufacturing process, one of the most serious problems facing the semiconductor industry as dies shrink.
- Less severe electromagnetic interference (EMI). Synchronous circuits create a great deal of EMI in the frequency band at (or very near) their clock frequency and its harmonics; asynchronous circuits generate EMI patterns which are much more evenly spread across the spectrum.
- Local signaling eliminates the need for global synchronization, which offers potential advantages over synchronous circuits in low power consumption, design reuse, improved noise immunity and electromagnetic compatibility; asynchronous circuits are also more tolerant of process variations and external voltage fluctuations[1].
- Less stress on the power distribution network. Synchronous circuits tend to draw a large amount of current right at the clock edge and shortly thereafter; the number of switching nodes (and hence the current drawn) drops off rapidly after the clock edge, reaching zero just before the next clock edge. In an asynchronous circuit, the switching times of the nodes are not correlated in this way, so the current draw tends to be more uniform and less bursty.
Disadvantages:-
Requires people experienced in synchronous design to learn a new style. Performance analysis of asynchronous circuits may be challenging.
Application:-
Asynchronous CPUs are one of several ideas for radically changing CPU design.
Unlike a conventional processor, a clockless processor (asynchronous CPU) has no central clock to coordinate the progress of data through the pipeline. Instead, stages of the CPU are coordinated using logic devices called "pipeline controls" or "FIFO sequencers": the pipeline controller clocks the next stage of logic when the existing stage is complete, so a central clock is unnecessary (a minimal handshaking sketch follows this discussion). It may actually be easier to implement high-performance devices in asynchronous, as opposed to clocked, logic:
components can run at different speeds on an asynchronous CPU; all major components of a clocked CPU must remain synchronized with the central clock; a traditional CPU cannot "go faster" than the expected worst-case performance of the slowest stage/instruction/component. When an asynchronous CPU completes an operation more quickly than anticipated, the next stage can immediately begin processing the results, rather than waiting for synchronization with a central clock. An operation might finish faster than normal because of attributes of the data being processed (e.g., multiplication can be very fast when multiplying by 0 or 1, even when running code produced by a naive compiler), or because of the presence of a higher voltage or bus speed setting, or a lower ambient temperature, than 'normal' or expected. Asynchronous logic proponents believe these capabilities would have these benefits:
lower power dissipation for a given performance level, and highest possible execution speeds. The biggest disadvantage of the clockless CPU is that most CPU design tools assume a clocked CPU (i.e., a synchronous circuit). Many tools "enforce synchronous design practices"[1]. Making a clockless CPU (designing an asynchronous circuit) involves modifying the design tools to handle clockless logic and doing extra testing to ensure the design avoids metastable problems. The group that designed the AMULET, for example, developed a tool called LARD to cope with the complex design of AMULET3.
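To make the handshaking idea concrete, here is a minimal software sketch of stages that coordinate only with their neighbours: each stage waits for data from its predecessor and hands its result to its successor whenever that stage is ready, with no global clock. This is only a Python analogy (bounded queues standing in for request/acknowledge signalling); real asynchronous hardware uses circuit-level handshake elements such as Muller C-elements, and all names and values here are illustrative.

```python
# Minimal software analogy of request/acknowledge handshaking between pipeline
# stages: bounded queues provide the back-pressure that replaces a global clock.
import queue
import threading

def stage(work, inbox, outbox):
    """Wait for the predecessor's data (the 'request'), process it at this
    stage's own pace, then pass the result downstream (the 'acknowledge'
    happens implicitly when the next queue accepts the item)."""
    while True:
        item = inbox.get()
        if item is None:          # sentinel: shut down and propagate
            outbox.put(None)
            return
        outbox.put(work(item))    # blocks until the next stage has room

# Three stages with different, data-dependent speeds; no central clock.
q0, q1, q2, q3 = (queue.Queue(maxsize=1) for _ in range(4))
for work, inbox, outbox in [(lambda x: x, q0, q1),
                            (lambda x: x * x, q1, q2),
                            (lambda x: x + 1, q2, q3)]:
    threading.Thread(target=stage, args=(work, inbox, outbox), daemon=True).start()

for value in [1, 2, 3, None]:     # None terminates the pipeline
    q0.put(value)

results = []
while (item := q3.get()) is not None:
    results.append(item)
print(results)                    # [2, 5, 10]
```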
Despite the difficulty of doing so, numerous asynchronous CPUs have been built, including:
the ORDVAC (?) and the (identical) ILLIAC I (1951),[2] the ILLIAC II (1962); the Caltech Asynchronous Microprocessor, the world's first asynchronous microprocessor (1988); the ARM-implementing AMULET (1993 and 2000); the asynchronous implementation of MIPS R3000, dubbed MiniMIPS (1998); an ARM-compatible processor (2003?) designed by Z. C. Yu, S. B. Furber, and L. A. Plana, "designed specifically to explore the benefits of asynchronous design for security sensitive applications";[3] the "Network-based Asynchronous Architecture" processor (2005) that executes a subset of the MIPS architecture instruction set;[3] and the SEAforth multi-core processor (2008) from Charles H. Moore.[4] The ILLIAC II was the first completely asynchronous, speed-independent processor design ever built; it was the most powerful computing machine known at the time.
DEC PDP-16 Register Transfer Modules (ca. 1973) allowed the experimenter to construct asynchronous, 16-bit processing elements. Delays for each module were fixed and based on the module's worst-case timing.
The Caltech Asynchronous Microprocessor (1988) was the first asynchronous microprocessor; Caltech designed and manufactured the world's first fully quasi-delay-insensitive processor. During demonstrations, the researchers amazed viewers by loading a simple program which ran in a tight loop, pulsing one of the output lines after each instruction. This output line was connected to an oscilloscope. When a cup of hot coffee was placed on the chip, the pulse rate (the effective "clock rate") naturally slowed down to adapt to the worsening performance of the heated transistors. When liquid nitrogen was poured on the chip, the instruction rate shot up with no additional intervention. Additionally, at lower temperatures the voltage supplied to the chip could be safely increased, which also improved the instruction rate, again with no additional configuration.
In 2004, Epson manufactured the world's first flexible microprocessor called ACT11, an 8-bit asynchronous chip. Synchronous flexible processors are slower, since bending the material on which a chip is fabricated causes wild and unpredictable variations in the delays of various transistors, for which worst case scenarios must be assumed everywhere and everything must be clocked at worst case speed. The processor is intended for use in smart cards, whose chips are currently limited in size to those small enough that they can remain perfectly rigid.

Ammeter

An ammeter is a measuring instrument used to measure the electric current in a circuit. Electric currents are measured in amperes (A), hence the name. Smaller currents can be measured with a milliammeter or a microammeter. Early ammeters were laboratory instruments only, relying on the Earth's magnetic field for operation. By the late 19th century, improved instruments were designed which could be mounted in any position and allowed accurate measurements in electric power systems.
History:-
The relation between electric currents, magnetic fields and physical forces was first noted by Hans Christian Ørsted, who in 1820 observed that a compass needle was deflected from pointing north when a current flowed in an adjacent wire. The tangent galvanometer was used to measure currents using this effect, with the restoring force returning the pointer to the zero position provided by the Earth's magnetic field. This made such instruments usable only when aligned with the Earth's field. Sensitivity was increased by using additional turns of wire to multiply the effect; such instruments were called "multipliers".
Type:-

The D'Arsonval galvanometer is a moving-coil ammeter. It uses magnetic deflection, where current passing through a coil causes the coil to move in a magnetic field. The voltage drop across the coil is kept to a minimum to minimize the resistance the ammeter adds to any circuit into which it is inserted. The modern form of this instrument was developed by Edward Weston and uses two spiral springs to provide the restoring force. By maintaining a uniform air gap between the iron core of the instrument and the poles of its permanent magnet, the instrument achieves good linearity and accuracy. Basic meter movements can have full-scale deflections for currents from about 25 microamperes to 10 milliamperes and have linear scales.
Moving-iron ammeters use a piece of iron which moves when acted upon by the electromagnetic force of a fixed coil of wire. This type of meter responds to both direct and alternating currents (as opposed to the moving-coil ammeter, which works on direct current only). The iron element consists of a moving vane attached to a pointer and a fixed vane, surrounded by a coil. As alternating or direct current flows through the coil, it induces a magnetic field in both vanes. The vanes repel each other and the moving vane deflects against the restoring force provided by fine helical springs.
An electrodynamic movement uses an electromagnet instead of the permanent magnet of the d'Arsonval movement. This instrument can respond to both alternating and direct current.
In a hot-wire ammeter, a current passes through a wire which expands as it heats. Although these instruments have slow response time and low accuracy, they are sometimes useful in measuring radio-frequency current.
Digital ammeter designs use an analog to digital converter (ADC) to measure the voltage across the shunt resistor; the digital display is calibrated to read the current through the shunt.
Application:-
To measure larger currents, a resistor called a shunt is placed in parallel with the meter. Most of the current flows through the shunt, and only a small fraction flows through the meter. This allows the meter to measure large currents. Traditionally, the meter used with a shunt has a full-scale deflection (FSD) of 50 mV, so shunts are typically designed to produce a voltage drop of 50 mV when carrying their full rated current.
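The arithmetic behind this convention is simple Ohm's-law bookkeeping. The sketch below sizes a shunt for the traditional 50 mV full-scale drop mentioned above and converts a measured drop back into a current, much as a digital ammeter's ADC reading would be scaled. Function names and example values are illustrative assumptions, not taken from the text.

```python
# Minimal sketch of ammeter-shunt arithmetic, assuming the traditional
# 50 mV full-scale-deflection convention. Names and values are illustrative.

def shunt_resistance(rated_current_a, full_scale_drop_v=0.050):
    """Choose a shunt so the rated current produces the meter's full-scale drop."""
    return full_scale_drop_v / rated_current_a

def current_through_shunt(measured_drop_v, shunt_ohms):
    """Ohm's law: current = measured voltage drop / shunt resistance."""
    return measured_drop_v / shunt_ohms

r = shunt_resistance(rated_current_a=100.0)          # 100 A shunt -> 0.5 milliohm
print(f"shunt = {r * 1e3:.2f} milliohm")
print(f"32 mV across the shunt -> {current_through_shunt(0.032, r):.1f} A")
```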
Zero-center ammeters are used for applications requiring current to be measured with both polarities, common in scientific and industrial equipment. Zero-center ammeters are also commonly placed in series with a battery. In this application, the charging of the battery deflects the needle to one side of the scale (commonly, the right side) and the discharging of the battery deflects the needle to the other side.
Since the ammeter shunt has a very low resistance, mistakenly wiring the ammeter in parallel with a voltage source will cause a short circuit, at best blowing a fuse, possibly damaging the instrument and wiring, and exposing an observer to injury.
In AC circuits, a current transformer converts the magnetic field around a conductor into a small AC current, typically either 1 A or 5 A at full rated current, that can be easily read by a meter. In a similar way, accurate AC/DC non-contact ammeters have been constructed using Hall-effect magnetic field sensors. A portable hand-held clamp-on ammeter, temporarily clipped over a wire to measure current, is a common tool for maintenance of industrial and commercial electrical equipment.
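Scaling a current-transformer reading back to the primary current is a single ratio. A minimal sketch follows, assuming a CT specified by a primary/secondary rating such as 400:5; the numbers and function name are illustrative, not from the text.

```python
# Minimal sketch: recover the primary current from a current-transformer (CT)
# secondary reading, assuming an illustrative 400:5 CT.

def primary_current(secondary_reading_a, ct_primary_rating_a, ct_secondary_rating_a=5.0):
    """Scale the meter's secondary reading up by the CT ratio."""
    return secondary_reading_a * (ct_primary_rating_a / ct_secondary_rating_a)

print(primary_current(secondary_reading_a=3.2, ct_primary_rating_a=400.0))  # 256.0 A
```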

Saturday, November 7, 2009

Electric charge


Electric charge is a fundamental conserved property of some subatomic particles, which determines their electromagnetic interaction. Electrically charged matter is influenced by, and produces, electromagnetic fields. The interaction between a moving charge and an electromagnetic field is the source of the electromagnetic force, which is one of the four fundamental forces.
The electric charge on a body may be positive or negative. Two positively charged bodies experience a mutual repulsive force, as do two negatively charged bodies. A positively charged body and a negatively charged body experience an attractive force. The study of how charged bodies interact is classical electrodynamics, which is accurate insofar as quantum effects can be ignored.
Twentieth-century experiments demonstrated that electric charge is quantized: the charge of any system, body, or particle (except quarks) is an integer multiple of the elementary charge, e, approximately equal to 1.602 × 10⁻¹⁹ coulombs. The proton has a charge of e, and the electron has a charge of −e. The study of charged particles, and how their interactions are mediated by photons, is quantum electrodynamics.

History:-
Electric charge is a characteristic property of many subatomic particles. The charges of free-standing particles are integer multiples of the elementary charge e; we say that electric charge is quantized. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge. Robert Millikan's oil-drop experiment demonstrated this fact directly, and measured the elementary charge.
By convention, the charge of an electron is −1, while that of a proton is +1. Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract. Coulomb's law quantifies the electrostatic force between two particles by asserting that the force is proportional to the product of their charges, and inversely proportional to the square of the distance between them.
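As a worked illustration of Coulomb's law as just stated, the sketch below evaluates F = k·q₁·q₂/r² for an electron and a proton; a negative result indicates attraction and a positive one repulsion. The constant and distance used are standard textbook values, not taken from this article.

```python
# Minimal sketch of Coulomb's law: F = k * q1 * q2 / r**2.
# Positive F means repulsion (like charges); negative F means attraction.

K_COULOMB = 8.988e9            # Coulomb constant, N*m^2/C^2
ELEMENTARY_CHARGE = 1.602e-19  # elementary charge, C

def coulomb_force(q1_c, q2_c, distance_m):
    return K_COULOMB * q1_c * q2_c / distance_m ** 2

# Electron and proton separated by roughly one Bohr radius (~5.29e-11 m): attraction.
print(coulomb_force(-ELEMENTARY_CHARGE, ELEMENTARY_CHARGE, 5.29e-11))  # about -8.2e-08 N
```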
The charge of an antiparticle equals that of the corresponding particle, but with opposite sign. Quarks have fractional charges of either −1/3 or +2/3 of the elementary charge, but free-standing quarks have never been observed (the theoretical explanation for this is color confinement).
The electric charge of a macroscopic object is the sum of the electric charges of the particles that make it up. This charge is often zero, because matter is made of atoms, and atoms have equal numbers of protons and electrons. More generally, in every molecule the number of anions (negatively charged atoms) equals the number of cations (positively charged atoms). When the net electric charge is non-zero and motionless, the phenomenon is known as static electricity. Even when the net charge is zero, it can be distributed non-uniformly (e.g., due to an external electric field or to molecular motion), in which case the material is said to be polarized. The charge due to polarization is known as bound charge, while the excess charge brought from outside is called free charge. The motion of charged particles (especially the motion of electrons in metals) in a given direction is known as electric current.
The SI unit of quantity of electric charge is the coulomb, which is equivalent to about 6.25 × 10¹⁸ e (e being the magnitude of the charge on a single electron or proton). Hence, the charge of an electron is approximately −1.602 × 10⁻¹⁹ C. The coulomb is defined as the quantity of charge that passes through the cross-section of an electrical conductor carrying one ampere for one second. The symbol Q is often used to denote a quantity of electricity or charge. The quantity of electric charge can be measured directly with an electrometer, or indirectly with a ballistic galvanometer.
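The quantization statement above reduces to dividing a macroscopic charge by e to see how many elementary charges it represents; a minimal sketch, with illustrative function names, is shown below.

```python
# Minimal sketch of charge quantization: a charge Q corresponds to roughly
# Q / e elementary charges. Names and example values are illustrative.

ELEMENTARY_CHARGE = 1.602e-19  # C

def elementary_charges(charge_c):
    """Number of elementary charges represented by a given charge in coulombs."""
    return charge_c / ELEMENTARY_CHARGE

print(f"1 C ~ {elementary_charges(1.0):.3e} e")                   # about 6.24e+18 e
print(f"one electron ~ {elementary_charges(-1.602e-19):.1f} e")   # -1.0 e
```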
After the quantized character of charge had been found, in 1891 George Stoney proposed the unit "electron" for this fundamental unit of electrical charge. This was before J. J. Thomson's discovery of the particle in 1897. Today the name "electron" for the unit of charge is no longer widely used except in the derived unit "electronvolt", which is surprising considering how widely the unit is used in physics and chemistry. The unit is today treated as nameless, referred to as the "fundamental unit of charge" or simply as "e".
Formally, a measure of charge should be a multiple of the elementary charge e (charge is quantized), but since it is an average, macroscopic quantity, many orders of magnitude larger than a single elementary charge, it can effectively take on any real value. Furthermore, in some contexts it is meaningful to speak of fractions of a charge; e.g. in the charging of a capacitor.
As reported by the Ancient Greek philosopher Thales of Miletus around 600 BC, charge (or electricity) could be accumulated by rubbing fur on various substances, such as amber. The Greeks noted that the charged amber buttons could attract light objects such as hair. They also noted that if they rubbed the amber for long enough, they could even get a spark to jump. This property derives from the triboelectric effect.
In 1600 the English scientist William Gilbert returned to the subject in De Magnete, and coined the New Latin word electricus from ηλεκτρον (elektron), the Greek word for "amber", which soon gave rise to the English words "electric" and "electricity." He was followed in 1660 by Otto von Guericke, who invented what was probably the first electrostatic generator. Other European pioneers were Robert Boyle, who in 1675 stated that electric attraction and repulsion can act across a vacuum; Stephen Gray, who in 1729 classified materials as conductors and insulators; and C. F. du Fay, who proposed in 1733[1] that electricity came in two varieties which cancelled each other, and expressed this in terms of a two-fluid theory. When glass was rubbed with silk, du Fay said that the glass was charged with vitreous electricity, and when amber was rubbed with fur, the amber was said to be charged with resinous electricity. In 1839, Michael Faraday showed that the apparent division between static electricity, current electricity and bioelectricity was incorrect, and all were a consequence of the behavior of a single kind of electricity appearing in opposite polarities. It is arbitrary which polarity you call positive and which you call negative. Positive charge can be defined as the charge left on a glass rod after being rubbed with silk.
One of the foremost experts on electricity in the 18th century was Benjamin Franklin, who argued in favour of a one-fluid theory of electricity. Franklin imagined electricity as being a type of invisible fluid present in all matter; for example he believed that it was the glass in a Leyden jar that held the accumulated charge. He posited that rubbing insulating surfaces together caused this fluid to change location, and that a flow of this fluid constitutes an electric current. He also posited that when matter contained too little of the fluid it was "negatively" charged, and when it had an excess it was "positively" charged. Arbitrarily (or for a reason that was not recorded) he identified the term "positive" with vitreous electricity and "negative" with resinous electricity. William Watson arrived at the same explanation at about the same time.