Search Engine

Wednesday, December 9, 2009

Passive analogue filter development

Analogue filters are a basic building block of signal processing, widely used in electronics. Amongst their many applications are the separation of an audio signal before application to bass, mid-range and tweeter loudspeakers; the combining and later separation of multiple telephone conversations onto a single channel; and the selection of a chosen radio station in a radio receiver and rejection of others. Passive linear electronic analogue filters are those filters which can be described with linear differential equations (linear); they are composed of capacitors, inductors and, sometimes, resistors (passive) and are designed to operate on continuously varying (analogue) signals. There are many linear filters which are not analogue in implementation (digital filters), and there are many electronic filters which may not have a passive topology – both of which may have the same transfer function as the filters described in this article. Analogue filters are most often used in wave filtering applications, that is, where it is required to pass particular frequency components and to reject others from analogue (continuous-time) signals.
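As a concrete sketch of wave filtering, the magnitude response of a first-order RC low-pass filter can be computed directly from |H(f)| = 1/√(1 + (ωRC)²); the component values below are illustrative, not a recommendation:

```python
import math

def rc_lowpass_gain(f_hz, r_ohm, c_farad):
    """Magnitude response |H(f)| of a first-order RC low-pass filter."""
    omega = 2 * math.pi * f_hz
    return 1 / math.sqrt(1 + (omega * r_ohm * c_farad) ** 2)

# Example component values (chosen for illustration): 1 kOhm and 159 nF
# give a cutoff near 1 kHz, since f_c = 1 / (2*pi*R*C).
R, C = 1e3, 159e-9
f_c = 1 / (2 * math.pi * R * C)

# Well below cutoff the signal passes almost unattenuated; at the cutoff
# frequency the gain is 1/sqrt(2) (-3 dB); well above, it rolls off.
print(rc_lowpass_gain(10, R, C))     # ≈ 1.0
print(rc_lowpass_gain(f_c, R, C))    # ≈ 0.707
print(rc_lowpass_gain(100e3, R, C))  # ≈ 0.01
```

The same pass/reject behaviour, mirrored, gives a high-pass filter, and combinations of such sections yield the band-splitting used in loudspeaker crossovers.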
Analogue filters have played an important part in the development of electronics. Especially in the field of telecommunications, filters have been of crucial importance in a number of technological breakthroughs and have been the source of enormous profits for telecommunications companies. It should come as no surprise, therefore, that the early development of filters was intimately connected with transmission lines. Transmission line theory gave rise to filter theory, which initially took a very similar form, and the main application of filters was for use on telecommunication transmission lines. However, the arrival of network synthesis techniques greatly enhanced the degree of control of the designer.

Today, it is often preferred to carry out filtering in the digital domain, where complex algorithms are much easier to implement, but analogue filters do still find applications, especially for low-order simple filtering tasks, and are often still the norm at higher frequencies where digital technology is still impractical, or at least less cost effective. Wherever possible, and especially at low frequencies, analogue filters are now implemented in an active filter topology in order to avoid the wound components required by passive topology.

It is possible to design linear analogue mechanical filters using mechanical components which filter mechanical vibrations or acoustic waves. While there are few applications for such devices in mechanics per se, they can be used in electronics with the addition of transducers to convert to and from the electrical domain. Indeed, some of the earliest ideas for filters were acoustic resonators, because the electronics technology was poorly understood at the time. In principle, the design of such filters can be achieved entirely in terms of the electrical counterparts of mechanical quantities, with kinetic energy, potential energy and heat energy corresponding to the energy in inductors, capacitors and resistors respectively.

Saturday, December 5, 2009

Transistor

A transistor is a semiconductor device commonly used to amplify or switch electronic signals. A transistor is made of a solid piece of a semiconductor material, with at least three terminals for connection to an external circuit. A voltage or current applied to one pair of the transistor's terminals changes the current flowing through another pair of terminals. Because the controlled (output) power can be much more than the controlling (input) power, the transistor provides amplification of a signal. Some transistors are packaged individually but most are found in integrated circuits.
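A rough numerical sketch of this amplifying action, using an idealized current-gain model; the gain value β = 100 is an illustrative assumption typical of small-signal devices, not a property of any particular transistor:

```python
# Idealized BJT current amplification: a small base current controls a
# collector current beta times larger. beta (the current gain) is assumed
# to be 100 here; real values are device-dependent.
def collector_current(i_base, beta=100):
    return beta * i_base

i_b = 50e-6                    # 50 microamps into the base
i_c = collector_current(i_b)   # 5 mA of controlled collector current
print(i_c)
```

Because the controlled collector current flows through a load at a higher voltage than the base drive, the output power can greatly exceed the input power, which is the amplification described above.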

Physicist Julius Edgar Lilienfeld filed the first patent for a transistor in Canada in 1925, describing a device similar to a field-effect transistor, or FET. However, Lilienfeld did not publish any research articles about his devices, and in 1934 German inventor Oskar Heil patented a similar device. In 1947, John Bardeen and Walter Brattain at AT&T's Bell Labs in the United States observed that when electrical contacts were applied to a crystal of germanium, the output power was larger than the input. Solid State Physics Group leader William Shockley saw the potential in this, and over the next few months worked to greatly expand the knowledge of semiconductors; he could thus be described as the "father of the transistor". The term transistor was coined by John R. Pierce. According to physicist and historian Robert Arns, legal papers from the Bell Labs patent show that William Shockley and Gerald Pearson had built operational versions from Lilienfeld's patents, yet they never referenced this work in any of their later research papers or historical articles. The first silicon transistor was produced by Texas Instruments in 1954. This was the work of Gordon Teal, an expert in growing crystals of high purity, who had previously worked at Bell Labs. The first MOS transistor actually built was by Kahng and Atalla at Bell Labs in 1960.
The transistor is considered by many to be one of the greatest inventions of the twentieth century. It is the key active component in practically all modern electronics. Its importance in today's society rests on its ability to be mass produced using a highly automated fabrication process that achieves astonishingly low per-transistor costs. Although several companies each produce over a billion individually packaged (known as discrete) transistors every year, the vast majority of transistors produced are in integrated circuits (often shortened to ICs, microchips or simply chips), along with diodes, resistors, capacitors and other electronic components, to produce complete electronic circuits. A logic gate consists of up to about twenty transistors, whereas an advanced microprocessor, as of 2006, can use as many as 1.7 billion transistors (MOSFETs). "About 60 million transistors were built this year [2002] ... for [each] man, woman, and child on Earth." The transistor's low cost, flexibility, and reliability have made it a ubiquitous device. Transistorized mechatronic circuits have replaced electromechanical devices in controlling appliances and machinery. It is often easier and cheaper to use a standard microcontroller and write a computer program to carry out a control function than to design an equivalent mechanical control function.
The bipolar junction transistor, or BJT, was the most commonly used transistor in the 1960s and 70s. Even after MOSFETs became widely available, the BJT remained the transistor of choice for many analog circuits, such as simple amplifiers, because of its greater linearity and ease of manufacture. Desirable properties of MOSFETs, such as their utility in low-power devices, usually in the CMOS configuration, allowed them to capture nearly all market share for digital circuits; more recently, MOSFETs have captured most analog and power applications as well, including modern clocked analog circuits, voltage regulators, amplifiers, power transmitters and motor drivers.

Electrical impedance

Electrical impedance, or simply impedance, is a measure of opposition to alternating current (AC). Electrical impedance extends the concept of resistance to AC circuits, describing not only the relative amplitudes of the voltage and current, but also the relative phases. When the circuit is driven with direct current (DC) there is no distinction between impedance and resistance; the latter can be thought of as impedance with zero phase angle. The symbol for impedance is usually Z, and it may be represented by writing its magnitude and phase in the form Z∠θ. However, complex-number representation is more powerful for circuit analysis purposes. The term impedance was coined by Oliver Heaviside in July 1886; Arthur Kennelly was the first to represent impedance with complex numbers, in 1893.

Impedance is defined as the frequency-domain ratio of the voltage to the current. In other words, it is the voltage–current ratio for a single complex exponential at a particular frequency ω. In general, impedance will be a complex number, but this complex number has the same units as resistance, for which the SI unit is the ohm. For a sinusoidal current or voltage input, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular, the magnitude of the complex impedance is the ratio of the voltage amplitude to the current amplitude, while the argument of the complex impedance gives the phase difference between the voltage and the current.
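As an illustration of the complex-number representation, a short sketch computing the impedance of a hypothetical series RLC circuit; the component values are chosen purely for illustration:

```python
import cmath
import math

def series_rlc_impedance(f_hz, r, l, c):
    """Complex impedance Z = R + jwL + 1/(jwC) of a series RLC circuit."""
    omega = 2 * math.pi * f_hz
    return r + 1j * omega * l + 1 / (1j * omega * c)

# Illustrative values: at the resonant frequency f0 = 1/(2*pi*sqrt(LC)) the
# inductive and capacitive reactances cancel and Z is purely resistive.
R, L, C = 50.0, 1e-3, 1e-6
f0 = 1 / (2 * math.pi * math.sqrt(L * C))
z = series_rlc_impedance(f0, R, L, C)

mag, phase = cmath.polar(z)
print(mag, phase)   # magnitude ≈ 50 ohms, phase ≈ 0 rad at resonance
```

The magnitude here is the voltage-to-current amplitude ratio, and the phase is the angle θ in the Z∠θ notation above.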

Friday, December 4, 2009

Hall effect sensor

A Hall effect sensor is a transducer that varies its output voltage in response to changes in magnetic field. Hall sensors are used for proximity switching, positioning, speed detection, and current sensing applications.

In its simplest form, the sensor operates as an analogue transducer, directly returning a voltage. With a known magnetic field, the distance of the magnet from the Hall plate can be determined, and using groups of sensors the relative position of the magnet can be deduced. Electricity carried through a conductor will produce a magnetic field that varies with current, so a Hall sensor can be used to measure the current without interrupting the circuit. Typically, the sensor is integrated with a wound core or permanent magnet that surrounds the conductor to be measured.

Frequently, a Hall sensor is combined with circuitry that allows the device to act in a digital (on/off) mode, and may be called a switch in this configuration. Commonly seen in industrial applications such as the pictured pneumatic cylinder, they are also used in consumer equipment; for example, some computer printers use them to detect missing paper and open covers. When high reliability is required, they are used in keyboards.

Hall sensors are commonly used to time the speed of wheels and shafts, such as for internal combustion engine ignition timing or tachometers. They are used in brushless DC electric motors to detect the position of the permanent magnet. In the pictured wheel carrying two equally spaced magnets, the voltage from the sensor will peak twice for each revolution. This arrangement is commonly used to regulate the speed of disc drives.
A Hall probe contains an indium compound crystal mounted on an aluminum backing plate and encapsulated in the probe head. The plane of the crystal is perpendicular to the probe handle. Connecting leads from the crystal are brought down through the handle to the circuit box.

When the Hall probe is held so that the magnetic field lines are passing at right angles through the sensor of the probe, the meter gives a reading of the value of magnetic flux density (B). A current is passed through the crystal which, when placed in a magnetic field, has a "Hall effect" voltage developed across it. The Hall effect is seen when a conductor is passed through a uniform magnetic field. The natural drift of the charge carriers causes the magnetic field to apply a Lorentz force (the force exerted on a charged particle in an electromagnetic field) to these charge carriers. The result is a charge separation, with a build-up of either positive or negative charges on the bottom or the top of the plate. The crystal measures 5 mm square. The probe handle, being made of a non-ferrous material, has no disturbing effect on the field.

A Hall probe is sensitive enough to measure the Earth's magnetic field. It must be held so that the Earth's field lines are passing directly through it. It is then rotated quickly so the field lines pass through the sensor in the opposite direction. The change in the flux density reading is double the Earth's magnetic flux density. A Hall probe must first be calibrated against a known value of magnetic field strength; for a solenoid, the Hall probe is placed in the center.
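The charge separation described above gives rise to the Hall voltage, which for a thin plate follows V_H = I·B/(n·q·t). A sketch with illustrative values (the carrier density and thickness below are assumptions, not those of a real probe):

```python
Q_E = 1.602e-19   # elementary charge, coulombs

def hall_voltage(i_amp, b_tesla, n_per_m3, t_m):
    """Hall voltage V_H = I*B / (n*q*t) for a thin conducting plate,
    where n is the charge-carrier density and t the plate thickness."""
    return i_amp * b_tesla / (n_per_m3 * Q_E * t_m)

# Illustrative values: 5 mA drive current, 0.1 T field, a semiconductor-like
# carrier density of 1e22 per m^3, and a 0.1 mm thick plate.
v = hall_voltage(5e-3, 0.1, 1e22, 1e-4)
print(v)   # ≈ 3.1 mV
```

Because V_H scales inversely with carrier density n, semiconductors such as indium compounds give a far larger Hall voltage than metals, which is why they are used as the sensing crystal.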

Thursday, December 3, 2009

PID controller

A proportional–integral–derivative controller (PID controller) is a generic control loop feedback mechanism (controller) widely used in industrial control systems. A PID controller attempts to correct the error between a measured process variable and a desired setpoint by calculating and then instigating a corrective action that can adjust the process accordingly and rapidly, to keep the error minimal.

The PID controller calculation (algorithm) involves three separate parameters: the proportional, integral and derivative values. The proportional value determines the reaction to the current error, the integral value determines the reaction based on the sum of recent errors, and the derivative value determines the reaction based on the rate at which the error has been changing. The weighted sum of these three actions is used to adjust the process via a control element such as the position of a control valve or the power supplied to a heating element.

By tuning the three constants in the PID controller algorithm, the controller can provide control action designed for specific process requirements. The response of the controller can be described in terms of its responsiveness to an error, the degree to which it overshoots the setpoint, and the degree of system oscillation. Note that the use of the PID algorithm for control does not guarantee optimal control of the system or system stability.
Some applications may require using only one or two modes to provide the appropriate system control. This is achieved by setting the gain of the undesired control outputs to zero. A PID controller is called a PI, PD, P or I controller in the absence of the respective control actions. PI controllers are particularly common, since derivative action is very sensitive to measurement noise, while the absence of an integral term may prevent the system from reaching its target value.
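A minimal discrete-time sketch of the three-term algorithm described above; the gains, the setpoint and the toy first-order process model are illustrative assumptions, and any real application would need proper tuning:

```python
class PID:
    """Textbook discrete PID: output = kp*e + ki*sum(e*dt) + kd*de/dt."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt                 # sum of past errors
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt     # rate of change of error
        self.prev_error = error
        # weighted sum of the proportional, integral and derivative actions
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a crude first-order process (think of a heater starting at 20
# degrees) toward a 50-degree setpoint. Gains are illustrative.
pid = PID(kp=2.0, ki=0.5, kd=0.1, setpoint=50.0)
pv, dt = 20.0, 0.1
for _ in range(400):
    mv = pid.update(pv, dt)
    pv += (mv - 0.1 * (pv - 20.0)) * dt   # toy process model with heat loss
print(pv)   # settles close to the 50.0 setpoint
```

Setting `kd=0.0` here yields the common PI controller mentioned above; setting `ki=0.0` as well leaves a pure P controller, which settles with a steady-state error.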
Note: Due to the diversity of the field of control theory and application, many naming conventions for the relevant variables are in common use.
A familiar example of a control loop is the action taken to keep one's shower water at the ideal temperature, which typically involves the mixing of two process streams, cold and hot water. The person feels the water to estimate its temperature. Based on this measurement they perform a control action: use the cold water tap to adjust the process. The person would repeat this input-output control loop, adjusting the hot water flow until the process temperature stabilized at the desired value.

Feeling the water temperature is taking a measurement of the process value or process variable (PV). The desired temperature is called the setpoint (SP). The output from the controller and input to the process (the tap position) is called the manipulated variable (MV). The difference between the measurement and the setpoint is the error (e): too hot or too cold, and by how much.

As a controller, one decides roughly how much to change the tap position (MV) after one determines the temperature (PV), and therefore the error. This first estimate is the equivalent of the proportional action of a PID controller. The integral action of a PID controller can be thought of as gradually adjusting the temperature when it is almost right. Derivative action can be thought of as noticing the water temperature is getting hotter or colder, and how fast, anticipating further change and tempering adjustments for a soft landing at the desired temperature (SP).

Making a change that is too large when the error is small is equivalent to a high-gain controller and will lead to overshoot. If the controller were to repeatedly make changes that were too large and repeatedly overshoot the target, the output would oscillate around the setpoint in either a constant, growing, or decaying sinusoid. If the oscillations increase with time the system is unstable, whereas if they decay the system is stable. If the oscillations remain at a constant magnitude the system is marginally stable.
A human would not do this because we are adaptive controllers, learning from the process history; PID controllers do not have the ability to learn and must be set up correctly. Selecting the correct gains for effective control is known as tuning the controller.

If a controller starts from a stable state at zero error (PV = SP), then further changes by the controller will be in response to changes in other measured or unmeasured inputs to the process that impact on the process, and hence on the PV. Variables that impact on the process other than the MV are known as disturbances. Generally, controllers are used to reject disturbances and/or implement setpoint changes. Changes in feed water temperature constitute a disturbance to the shower process.

In theory, a controller can be used to control any process which has a measurable output (PV), a known ideal value for that output (SP) and an input to the process (MV) that will affect the relevant PV. Controllers are used in industry to regulate temperature, pressure, flow rate, chemical composition, speed and practically every other variable for which a measurement exists. Automobile cruise control is an example of a process which utilizes automated control.
Due to their long history, simplicity, well grounded theory and simple setup and maintenance requirements, PID controllers are the controllers of choice for many of these applications.

Wednesday, December 2, 2009

Inductance

Inductance is the property of an electrical circuit whereby a change in the electric current through that circuit induces an electromotive force (EMF) that opposes the change in current (see induced EMF). In electrical circuits, any electric current i produces a magnetic field and hence generates a total magnetic flux Φ acting on the circuit. This magnetic flux, due to Lenz's law, tends to oppose changes in the flux by generating a voltage (a back EMF) in the circuit that counters or tends to reduce the rate of change in the current. The ratio of the magnetic flux to the current is called the self-inductance, which is usually simply referred to as the inductance of the circuit. To add inductance to a circuit, electronic components called inductors are used, which consist of coils of wire to concentrate the magnetic field. The term 'inductance' was coined by Oliver Heaviside in February 1886. It is customary to use the symbol L for inductance, possibly in honour of the physicist Heinrich Lenz.
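As a worked example, the inductance of an idealized long solenoid follows L = μ₀N²A/l, and the back EMF opposing a current change follows |v| = L·di/dt. The coil dimensions below are illustrative:

```python
import math

MU_0 = 4 * math.pi * 1e-7   # permeability of free space, henries per metre

def solenoid_inductance(turns, area_m2, length_m):
    """Ideal long-solenoid inductance L = mu0 * N^2 * A / l."""
    return MU_0 * turns**2 * area_m2 / length_m

def back_emf(l_henry, di_dt):
    """Magnitude of the EMF opposing a current change: |v| = L * di/dt."""
    return l_henry * di_dt

# Illustrative coil: 500 turns, 1 cm^2 cross-section, 5 cm long.
L = solenoid_inductance(500, 1e-4, 5e-2)
print(L)                 # ≈ 0.63 mH
print(back_emf(L, 100))  # ramping the current at 100 A/s induces ≈ 63 mV
```

The sign of the induced voltage opposes the change in current, per Lenz's law; only the magnitude is computed here.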

Friday, November 27, 2009

Electromagnetic spectrum

The electromagnetic spectrum is the range of all possible frequencies of electromagnetic radiation. The "electromagnetic spectrum" of an object is the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object. The electromagnetic spectrum extends from below the frequencies used for modern radio to gamma radiation at the short-wavelength end, covering wavelengths from thousands of kilometers down to a fraction of the size of an atom. The long-wavelength limit is the size of the universe itself, while it is thought that the short-wavelength limit is in the vicinity of the Planck length, although in principle the spectrum is infinite and continuous.

While the classification scheme is generally accurate, in reality there is often some overlap between neighboring types of electromagnetic energy. For example, radio waves at 60 Hz may be received and studied by astronomers, or may be ducted along wires as electric power.
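Conversions between frequency and wavelength across the spectrum follow λ = c/f. A quick sketch (the FM and visible-light frequencies are round illustrative figures):

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def wavelength(f_hz):
    """Vacuum wavelength lambda = c / f for an EM wave of frequency f."""
    return C / f_hz

# The 60 Hz power-line example above has an enormous wavelength,
# while an FM radio carrier and visible light are far shorter.
print(wavelength(60))       # ≈ 5.0e6 m, about 5000 km
print(wavelength(100e6))    # ≈ 3.0 m, FM broadcast band
print(wavelength(5.45e14))  # ≈ 550 nm, green visible light
```

The 5000 km wavelength at 60 Hz is why such radiation is as readily ducted along power lines as radiated into space.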
The distinction between X-rays and gamma rays is based on their sources: gamma rays are photons generated by nuclear decay or other nuclear and particle processes, whereas X-rays are generated by electronic transitions involving highly energetic inner atomic electrons.
Also, the region of the spectrum occupied by particular electromagnetic radiation is reference-frame dependent (on account of the Doppler shift for light), so EM radiation which one observer would say is in one region of the spectrum could appear to an observer moving at a substantial fraction of the speed of light with respect to the first to be in another part of the spectrum. For example, consider the cosmic microwave background. It was produced, when matter and radiation decoupled, by the de-excitation of hydrogen atoms to the ground state. These photons were from Lyman series transitions, putting them in the ultraviolet (UV) part of the electromagnetic spectrum. Now this radiation has undergone enough cosmological redshift to put it into the microwave region of the spectrum for observers moving slowly (compared to the speed of light) with respect to the cosmos. However, for particles moving near the speed of light, this radiation will be blueshifted in their rest frame. The highest-energy cosmic-ray protons are moving such that, in their rest frame, this radiation is blueshifted to high-energy gamma rays, which interact with the protons to produce pions. This is the source of the GZK (Greisen–Zatsepin–Kuzmin) limit on cosmic-ray energies.
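The cosmological redshift stretches wavelengths by a factor of (1 + z). Taking the commonly quoted decoupling redshift z ≈ 1100 as an illustrative figure, the shift of a Lyman-alpha photon can be computed directly:

```python
def redshifted_wavelength(lambda_emit_m, z):
    """Observed wavelength under cosmological redshift: lambda * (1 + z)."""
    return lambda_emit_m * (1 + z)

# The UV Lyman-alpha line, stretched by the z ~ 1100 of the decoupling era
# (an illustrative round figure), lands at a wavelength about a thousand
# times longer, far beyond the ultraviolet.
lyman_alpha = 121.6e-9   # metres
print(redshifted_wavelength(lyman_alpha, 1100))  # ≈ 1.34e-4 m
```

The same factor applied to photon energy, which scales as 1/λ, is what blueshifts this background into gamma rays in the rest frame of an ultra-relativistic proton.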