
Metric prefixes

Prefix   Symbol   1000^m        10^n     Decimal                              US English word[n 1]   Since[n 2]
yotta    Y        1000^8        10^24    1000000000000000000000000            septillion             1991
zetta    Z        1000^7        10^21    1000000000000000000000               sextillion             1991
exa      E        1000^6        10^18    1000000000000000000                  quintillion            1975
peta     P        1000^5        10^15    1000000000000000                     quadrillion            1975
tera     T        1000^4        10^12    1000000000000                        trillion               1960
giga     G        1000^3        10^9     1000000000                           billion                1960
mega     M        1000^2        10^6     1000000                              million                1960
kilo     k        1000^1        10^3     1000                                 thousand               1795
hecto    h        1000^(2/3)    10^2     100                                  hundred                1795
deca     da       1000^(1/3)    10^1     10                                   ten                    1795
(none)   (none)   1000^0        10^0     1                                    one                    -
deci     d        1000^(-1/3)   10^-1    0.1                                  tenth                  1795
centi    c        1000^(-2/3)   10^-2    0.01                                 hundredth              1795
milli    m        1000^(-1)     10^-3    0.001                                thousandth             1795
micro    μ        1000^(-2)     10^-6    0.000001                             millionth              1960
nano     n        1000^(-3)     10^-9    0.000000001                          billionth              1960
pico     p        1000^(-4)     10^-12   0.000000000001                       trillionth             1960
femto    f        1000^(-5)     10^-15   0.000000000000001                    quadrillionth          1964
atto     a        1000^(-6)     10^-18   0.000000000000000001                 quintillionth          1964
zepto    z        1000^(-7)     10^-21   0.000000000000000000001              sextillionth           1991
yocto    y        1000^(-8)     10^-24   0.000000000000000000000001           septillionth           1991

[n 1] This table uses the short scale.
[n 2] The metric system was introduced in 1795 with six prefixes. The other dates relate to recognition by a resolution of the CGPM.

For a steady flow of charge through a surface, the current I (in amperes) can be calculated with the following equation:

I = Q / t

where Q is the electric charge in coulombs, I is the current in amperes, and t is the time in seconds. More generally, electric current can be represented as the rate at which charge flows through a given surface:

I = dQ / dt
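As a numerical illustration of I = Q / t, here is a minimal Python sketch; the helper name and values are illustrative, not from the source.

```python
# Steady current from total charge and elapsed time: I = Q / t
def current_amperes(charge_coulombs: float, time_seconds: float) -> float:
    """Return the steady current in amperes for a charge (C) passing in a time (s)."""
    return charge_coulombs / time_seconds

# Example: 12 C of charge passing a point in 4 s gives 3 A.
print(current_amperes(12.0, 4.0))  # 3.0
```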

The coulomb (unit symbol: C) is the SI derived unit of electric charge (symbol: Q or q); in SI base units it is an ampere second (A·s). It is defined as the charge transported by a constant current of one ampere in one second:

1 C = 1 A × 1 s

One coulomb is also the amount of excess charge on the positive side of a capacitor of one farad charged to a potential difference of one volt:

1 C = 1 F × 1 V

The ampere is a measure of the amount of electric charge passing a point in an electric circuit per unit time, with 6.241 × 10^18 electrons, or one coulomb, per second constituting one ampere. A current of one ampere is one coulomb of charge going past a given point per second:

1 A = 1 C / s

A charge Q is determined by a steady current I flowing for a time t as Q = I·t. The SI unit of electric charge is the coulomb (C), although in electrical engineering it is also common to use the ampere-hour (Ah), and in chemistry it is common to use the elementary charge (e) as a unit. The symbol Q is often used to denote a charge. The study of how charged substances interact is classical electrodynamics, which is accurate insofar as quantum effects can be ignored. The electric charge is a fundamental conserved property of some subatomic particles, which determines their electromagnetic interaction. Electrically charged matter is influenced by, and produces, electromagnetic fields. The interaction between a moving charge and an electromagnetic field is the source of the electromagnetic force, which is one of the four fundamental forces.
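The relation Q = I·t and the unit relationships mentioned above can be checked with a small Python sketch; the constants are the standard values for the ampere-hour and the elementary charge, and the helper names are mine.

```python
# Charge unit conversions: Q = I * t, 1 Ah = 3600 C, e ~ 1.602176634e-19 C
ELEMENTARY_CHARGE_C = 1.602176634e-19   # coulombs per elementary charge (current SI value)
COULOMBS_PER_AMPERE_HOUR = 3600.0       # 1 A flowing for 3600 s

def charge_from_current(current_a: float, time_s: float) -> float:
    """Q = I * t, in coulombs."""
    return current_a * time_s

q = charge_from_current(2.0, 60.0)       # 2 A for one minute -> 120 C
print(q / COULOMBS_PER_AMPERE_HOUR)      # ~0.0333 Ah
print(q / ELEMENTARY_CHARGE_C)           # ~7.49e20 elementary charges
```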

Resistive heating
Main article: Joule heating Joule heating, also known as ohmic heating and resistive heating, is the process by which the passage of an electric current through a conductor releases heat. It was first studied by James Prescott Joule in 1841. Joule immersed a length of wire in a fixed mass of water and measured the temperature rise due to a known current flowing through the wire for a 30 minute period. By varying the current and the length of the wire he deduced that the heat produced was proportional to the square of the current multiplied by the electrical resistance of the wire.

This relationship is known as Joule's First Law. The SI unit of energy was subsequently named the joule and given the symbol J. The commonly known unit of power, the watt, is equivalent to one joule per second. Energy is the capacity of a system to perform work. The definition of work in physics is the movement of a force through a distance, and energy is measured in the same units as work. If a person pushes an object x metres against an opposing force of F newtons, Fx joules (newton-metres) of work has been done on the object; the person's body has lost Fx joules of energy, and the object has gained Fx joules of energy. The SI unit of energy is the joule (J) (equivalent to a newton-metre or a watt-second); the CGS unit is the erg, and the Imperial unit is the foot pound. Other energy units such as the electron volt, calorie, BTU, and kilowatt-hour (1 kWh = 3600 kJ) are used in specific areas of science and commerce. Work: In physics, a force is said to do work when it acts on a body so that there is a displacement of the point of application in the direction of the force. Thus a force does work when it results in movement.[1] The term work was introduced in 1826 by the French mathematician Gaspard-Gustave Coriolis[2][3] as "weight lifted through a height", which is based on the use of early steam engines to lift buckets of water out of flooded ore mines. The SI unit of work is the newton-metre or joule (J). The work done by a constant force of magnitude F on a point that moves a displacement d in the direction of the force is the product

W = F d

For example, if a force of 10 newtons (F = 10 N) acts on a point that travels 2 metres (d = 2 m), then it does the work W = (10 N)(2 m) = 20 N·m = 20 J. This is approximately the work done lifting a 1 kg weight from the ground to over a person's head against the force of gravity. Notice that the work is doubled either by lifting twice the weight the same distance or by lifting the same weight twice the distance.
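The same worked example can be reproduced in a few lines of Python; this is a minimal sketch with illustrative names and values.

```python
# Work done by a constant force along the direction of motion: W = F * d
def work_joules(force_newtons: float, distance_metres: float) -> float:
    return force_newtons * distance_metres

print(work_joules(10.0, 2.0))  # 20.0 J, the example from the text
# Doubling either the force or the distance doubles the work:
print(work_joules(20.0, 2.0), work_joules(10.0, 4.0))  # 40.0 40.0
```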

The joule

1 J in ...          is equal to ...
SI base units       1 kg·m²/s²
CGS units           1 × 10^7 erg
kilowatt hours      2.78 × 10^-7 kWh
kilocalories        2.39 × 10^-4 kcal
BTUs                9.48 × 10^-4 BTU
electronvolts       6.24 × 10^18 eV

1 joule is equal to:

1 × 10^7 ergs (exactly)
6.24150974 × 10^18 eV (electronvolts)
0.2390 cal (thermochemical gram calories or small calories)
2.3901 × 10^-4 kcal (thermochemical kilocalories, kilogram calories, large calories or food calories)
9.4782 × 10^-4 BTU (British thermal unit)
0.7376 ft·lb (foot-pounds)
23.7 ft·pdl (foot-poundals)
2.7778 × 10^-7 kilowatt-hour
2.7778 × 10^-4 watt-hour
9.8692 × 10^-3 litre-atmosphere
11.1265 femtograms (mass-energy equivalence)
1 × 10^-44 foe (exactly)

Units defined exactly in terms of the joule include:


1 thermochemical calorie = 4.184 J[14]
1 International Table calorie = 4.1868 J[14]
1 watt hour = 3600 J
1 kilowatt hour = 3.6 × 10^6 J (or 3.6 MJ)
1 watt second = 1 J
1 ton TNT = 4.184 GJ
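A small Python sketch using the conversion factors listed above shows how a quantity in joules might be expressed in the other units; the helper names and the sample energy value are illustrative.

```python
# Approximate value of 1 joule in other energy units (factors taken from the lists above)
JOULE_IN = {
    "erg": 1e7,                          # exact
    "eV": 6.241509e18,
    "cal (thermochemical)": 1 / 4.184,   # exact by definition of the thermochemical calorie
    "kWh": 1 / 3.6e6,                    # exact by definition of the kilowatt hour
    "BTU": 9.4782e-4,
    "ft*lb": 0.7376,
}

energy_j = 2500.0  # e.g. 2.5 kJ
for unit, factor in JOULE_IN.items():
    print(f"{energy_j} J = {energy_j * factor:.4g} {unit}")
```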

The joule, symbol J, is a derived unit of energy, work, or amount of heat in the International System of Units.[1] It is equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre (1 newton-metre or N·m), or in passing an electric current of one ampere through a resistance of one ohm for one second. It is named after the English physicist James Prescott Joule (1818–1889).[2][3][4] In terms firstly of base SI units and then in terms of other SI units:

1 J = 1 kg·m²/s² = 1 N·m = 1 Pa·m³ = 1 W·s = 1 C·V

where N is the newton, m is the metre, kg is the kilogram, s is the second, Pa is the pascal, W is the watt, C is the coulomb, and V is the volt. One joule can also be defined as:

The work required to move an electric charge of one coulomb through an electrical potential difference of one volt, or one "coulomb volt" (C·V). This relationship can be used to define the volt. The work required to produce one watt of power for one second, or one "watt second" (W·s) (compare kilowatt hour). This relationship can be used to define the watt. The electron volt (symbol eV; also written electronvolt[1][2]) is a unit of energy equal to approximately 1.6 × 10^-19 joule (symbol J). By definition, it is the amount of energy gained (or lost) by the charge of a single electron moved across an electric potential difference of one volt. Thus it is 1 volt (1 joule per coulomb, 1 J/C) multiplied by the negative of the electron charge (e, or 1.602176565(35) × 10^-19 C). Therefore, one electron volt is equal to 1.602176565(35) × 10^-19 J.[3] Historically, the electron volt was devised as a standard unit of measure through its usefulness in electrostatic particle accelerator sciences, because a particle with charge q has an energy E = qV after passing through the potential V; if q is quoted in integer units of the elementary charge and the terminal bias in volts, one gets an energy in eV. The electron volt is not an SI unit, and thus its value in SI units must be obtained experimentally.[4] Like the elementary charge on which it is based, it is not an independent quantity but is equal to 1 J/C × √(2hα/μ0c0). It is a common unit of energy within physics, widely used in solid state, atomic, nuclear, and particle physics. It is commonly used with the SI prefixes milli-, kilo-, mega-, giga-, tera-, peta- or exa- (meV, keV, MeV, GeV, TeV, PeV and EeV respectively). Thus meV stands for milli-electron volt.

In some older documents, and in the name Bevatron, the symbol BeV is used, which stands for billion electron volts; it is equivalent to the GeV.

The watt (symbol: W; 1 W = 1 kg·m²/s³) is a derived unit of power in the International System of Units (SI), named after the Scottish engineer James Watt (1736–1819). The unit, defined as one joule per second, measures the rate of energy conversion or transfer. One watt is the rate at which work is done when an object's velocity is held constant at one metre per second against a constant opposing force of one newton.

In terms of electromagnetism, one watt is the rate at which work is done when one ampere (A) of current flows through an electrical potential difference of one volt (V).

Two additional unit conversions for the watt can be found using the above relationship (P = V·I) and Ohm's law (V = I·R):

P = I²·R = V²/R

where the ohm (Ω) is the SI derived unit of electrical resistance. Power (symbol: P) is defined as the amount of energy consumed per unit time. In the MKS system, the unit of power is the joule per second (J/s), known as the watt (in honour of James Watt, the eighteenth-century developer of the steam engine). For example, the rate at which a light bulb converts electrical energy into heat and light is measured in watts; the more wattage, the more power, or equivalently the more electrical energy is used per unit time.[1][2] Energy transfer can be used to do work, so power is also the rate at which this work is performed. The same amount of work is done when carrying a load up a flight of stairs whether the person carrying it walks or runs, but more power is expended during the running because the work is done in a shorter amount of time. The output power of an electric motor is the product of the torque the motor generates and the angular velocity of its output shaft. The power expended to move a vehicle is the product of the traction force of the wheels and the velocity of the vehicle. The integral of power over time defines the work done. Because this integral depends on the trajectory of the point of application of the force and torque, this calculation of work is said to be path dependent. As a simple example, burning a kilogram of coal releases much more energy than does detonating a kilogram of TNT,[3] but because the TNT reaction releases energy much more quickly, it delivers far more power than the coal. If W is the amount of work performed during a period of time of duration t, the average power Pavg over that period is given by the formula

Pavg = W / t

It is the average amount of work done or energy converted per unit of time. The average power is often simply called "power" when the context makes it clear. Horsepower (hp) is the name of several units of measurement of power, the rate at which work is done. The most common conversion factor, especially for electrical power, is 1 hp = 746 watts. The term was adopted in the late 18th century by Scottish engineer James Watt to compare the output of steam engines with the power of draft horses. It was later

expanded to include the output power of other types of piston engines, as well as turbines, electric motors and other machinery.[1][2] The definition of the unit varied between geographical regions. Most countries now use the SI unit watt for measurement of power. With the implementation of the EU Directive 80/181/EEC on January 1, 2010, the use of horsepower in the EU is only permitted as a supplementary unit. The development of the steam engine provided a reason to compare the output of horses with that of the engines that could replace them. In 1702, Thomas Savery wrote in The Miner's Friend:[6] "So that an engine which will raise as much water as two horses, working together at one time in such a work, can do, and for which there must be constantly kept ten or twelve horses for doing the same. Then I say, such an engine may be made large enough to do the work required in employing eight, ten, fifteen, or twenty horses to be constantly maintained and kept for doing such a work." The idea was later used by James Watt to help market his improved steam engine. He had previously agreed to take royalties of one third of the savings in coal from the older Newcomen steam engines.[7] This royalty scheme did not work with customers who did not have existing steam engines but used horses instead. Watt determined that a horse could turn a mill wheel 144 times in an hour (or 2.4 times a minute). The wheel was 12 feet in radius; therefore, the horse travelled 2.4 × 2π × 12 feet in one minute. Watt judged that the horse could pull with a force of 180 pounds. So:

power = 180 lbf × 2.4 × 2π × 12 ft per minute ≈ 32,572 ft·lbf/min, which Watt rounded to 33,000 ft·lbf/min and defined as one horsepower.
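Watt's estimate can be reproduced directly from the figures quoted above; the Python sketch below is illustrative, and the final conversion to watts assumes the standard factor 1 ft·lbf ≈ 1.3558 J.

```python
import math

# Watt's estimate: a horse turns a 12 ft radius mill wheel 2.4 times per minute
# while pulling with a force of 180 pounds-force.
turns_per_minute = 2.4
wheel_radius_ft = 12.0
force_lbf = 180.0

distance_ft_per_min = turns_per_minute * 2 * math.pi * wheel_radius_ft
power_ftlbf_per_min = force_lbf * distance_ft_per_min
print(round(power_ftlbf_per_min))          # ~32572 ft*lbf/min, rounded up to 33,000
print(power_ftlbf_per_min * 1.3558 / 60)   # ~736 W (assuming 1 ft*lbf ~ 1.3558 J)
```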

The name calorie is used for two units of energy; the small (thermochemical) calorie is defined as exactly 4.184 joules.

The small calorie or gram calorie (symbol: cal) is the approximate amount of energy needed to raise the temperature of one gram of water by one degree Celsius.[1] The large calorie, kilogram calorie, dietary calorie, nutritionist's calorie or food calorie (symbol: Cal, equiv: kcal) is the amount of energy needed to raise the temperature of one kilogram of water by one degree Celsius. The large calorie is thus equal to 1000 small calories or one kilocalorie (symbol: kcal).[1]

Although these units are part of the metric system, they now have been superseded in the International System of Units by the joule. One small calorie is approximately 4.2 joules (one large calorie or kilocalorie is therefore approximately 4.2 kilojoules). The factors used to convert calories to joules are numerically equivalent to expressions of the specific heat capacity of water in joules per gram or per kilogram. The conversion factor depends on the definition adopted. In spite of its non-official status, the large calorie is still widely used as a unit of food energy in the US, UK and some other Western countries. The small calorie is also often used in chemistry as the method of measurement is fairly straightforward in most reactions, though the amounts involved are typically expressed in thousands as kcal, an equivalent unit to the large calorie.

The calorie was first defined by Nicolas Clément in 1824 as a unit of heat,[2] and entered French and English dictionaries between 1841 and 1867. The word comes from Latin calor, meaning "heat". The ohm (symbol: Ω) is the SI derived unit of electrical resistance, named after German physicist Georg Simon Ohm. Although several empirically derived standard units for expressing electrical resistance were developed in connection with early telegraphy practice, the British Association for the Advancement of Science proposed a unit derived from existing units of mass, length and time and of a convenient size for practical work as early as 1861. The definition of the "ohm" unit was revised several times. Today the value of the ohm is expressed in terms of the quantum Hall effect. The ohm is defined as a resistance between two points of a conductor when a constant potential difference of 1 volt, applied to these points, produces in the conductor a current of 1 ampere, the conductor not being the seat of any electromotive force.[1] In SI and other units,

1 Ω = 1 V/A = 1 kg·m²/(s³·A²) = 1 J·s/C² = 1 s/F = 1/S

where: V = volt, A = ampere, m = metre, kg = kilogram, s = second, C = coulomb, J = joule, S = siemens, F = farad. In many cases the resistance of a conductor in ohms is approximately constant within a certain range of voltages, temperatures, and other parameters; one speaks of linear resistors. In other cases resistance varies (e.g., thermistors). Commonly used multiples and submultiples in electrical and electronic usage are the milliohm, kilohm, megohm, and gigaohm;[2] the term 'gigohm', though not official, is in common use for the latter.[3] In alternating current circuits, electrical impedance is also measured in ohms. An ion is an atom or molecule in which the total number of electrons is not equal to the total number of protons, giving the atom a net positive or negative electrical charge. Ions can be created by both chemical and physical means. In chemical terms, if a neutral atom loses one or more electrons, it has a net positive charge and is known as a cation. If an atom gains electrons, it has a net negative charge and is known as an anion. An ion

consisting of a single atom is an atomic or monatomic ion; if it consists of two or more atoms, it is a molecular or polyatomic ion. In the case of physical ionization of a medium, such as a gas, what are known as "ion pairs" are created by ion impact, and each pair consists of a free electron and a positive ion. Since the electric charge on a proton is equal in magnitude to the charge on an electron, the net electric charge on an ion is equal to the number of protons in the ion minus the number of electrons. An anion (−), from the Greek word ἄνω (ánō), meaning "up", is an ion with more electrons than protons, giving it a net negative charge (since electrons are negatively charged and protons are positively charged). A cation (+), from the Greek word κάτω (kátō), meaning "down", is an ion with fewer electrons than protons, giving it a positive charge. There are additional names used for ions with multiple charges. For example, an ion with a −2 charge is known as a dianion and an ion with a +2 charge is known as a dication. A zwitterion is a neutral molecule with positive and negative charges at different locations within that molecule. Voltage, electrical potential difference, electric tension or electric pressure (denoted V and measured in units of electric potential: volts, or joules per coulomb) is the electric potential difference between two points, or the difference in electric potential energy of a unit charge transported between two points.[1] Voltage is equal to the work done per unit charge against a static electric field to move the charge between two points. A voltage may represent either a source of energy (electromotive force), or lost, used, or stored energy (potential drop). A voltmeter can be used to measure the voltage (or potential difference) between two points in a system; usually a common reference potential such as the ground of the system is used as one of the points. Voltage can be caused by static electric fields, by electric current through a magnetic field, by time-varying magnetic fields, or some combination of these three. Given two points in space, called A and B, voltage is the difference of electric potentials between those two points. From the definition of electric potential it follows that V_AB = V_A - V_B.

Voltage is electric potential energy per unit charge, measured in joules per coulomb ( = volts). It is often referred to as "electric potential", which then must be distinguished from electric potential energy by noting that the "potential" is a "per-unit-charge" quantity. Like mechanical potential energy, the zero of potential can be chosen at any point, so the difference in voltage is the quantity which is physically meaningful. The difference in voltage measured when moving from point A to point B is equal to the work which would have to be done, per unit charge, against the electric field to move the charge from A to

B. The voltage between the two ends of a path is the total energy required to move a small electric charge along that path, divided by the magnitude of the charge. Mathematically this is expressed as the line integral of the electric field and the time rate of change of magnetic field along that path. In the general case, both a static (unchanging) electric field and a dynamic (time-varying) electromagnetic field must be included in determining the voltage between two points.

Electromotive force, also called emf[1] (denoted ℰ and measured in volts), refers to voltage generated by a battery or by the magnetic force according to Faraday's Law, which states that a time-varying magnetic field induces an electric current.[2] The electromotive "force" is not a force, as force is measured in newtons, but a potential, or energy per unit of charge, measured in volts. In electromagnetic induction, emf can be defined around a closed loop as the electromagnetic work that would be transferred to a unit of charge if it travels once around that loop.[3] (While the charge travels around the loop, it can simultaneously lose the energy via resistance into thermal energy.) For a time-varying magnetic flux impinging a loop, the electric potential scalar field is not defined due to the circulating electric vector field, but nevertheless an emf does work that can be measured as a virtual electric potential around that loop.[4] In a two-terminal device (such as an electrochemical cell or electromagnetic generator), the emf can be measured as voltage across the two open-circuited terminals. The created electrical potential difference drives current flow if a circuit is attached to the source of emf. When current flows, however, the voltage across the terminals of the source of emf is no longer the open-circuit value, due to voltage drops inside the device due to its internal resistance. Devices that can provide emf include electrochemical cells, thermoelectric devices, solar cells, electrical generators, transformers, and even Van de Graaff generators.[4][5] In nature, emf is generated whenever magnetic field fluctuations occur through a surface. An example for this is the varying Earth magnetic field during a geomagnetic storm, acting on anything on the surface of the planet, like an extended electrical grid. In the case of a battery, charge separation that gives rise to a voltage difference is accomplished by chemical reactions at the electrodes.[6] Chemically, by separating positive and negative charges, an electric field can be produced, leading to an electric potential difference.[7][6] A voltaic cell can be thought of as having a "charge pump" of atomic dimensions at each electrode, that is:[8] A source of emf can be thought of as a kind of charge pump that acts to move positive charge from a point of low potential through its interior to a point of high potential. By chemical, mechanical or other means, the source of emf performs work dW on that charge to

move it to the high potential terminal. The emf of the source is defined as the work dW done per charge dq: ℰ = dW/dq. Around 1830 Faraday established that the reactions at each of the two electrode-electrolyte interfaces provide the "seat of emf" for the voltaic cell, that is, these reactions drive the current.[9] In the open-circuit case, charge separation continues until the electrical field from the separated charges is sufficient to arrest the reactions. Years earlier, Volta, who had measured a contact potential difference at the metal-metal (electrode-electrode) interface of his cells, held the incorrect opinion that this contact potential was the origin of the seat of emf. In the case of an electrical generator, a time-varying magnetic field inside the generator creates an electric field via electromagnetic induction, which in turn creates an energy difference between generator terminals. Charge separation takes place within the generator, with electrons flowing away from one terminal and toward the other, until, in the open-circuit case, sufficient electric field builds up to make further movement unfavorable. Again the emf is countered by the electrical voltage due to charge separation. If a load is attached, this voltage can drive a current. The general principle governing the emf in such electrical machines is Faraday's law of induction. A solar cell or photodiode is another source of emf, with light energy as the external power source.

Alternating currents are accompanied (or caused) by alternating voltages. An AC voltage v can be described mathematically as a function of time by the following equation:

v(t) = V_peak · sin(ωt)

where V_peak is the peak voltage (unit: volt), ω is the angular frequency (unit: radians per second), and t is the time (unit: second). The angular frequency is related to the physical frequency f (unit: hertz), which represents the number of cycles per second, by the equation ω = 2πf.

The peak-to-peak value of an AC voltage is defined as the difference between its positive peak and its negative peak. Since the maximum value of sin(ωt) is +1 and the minimum value is -1, an AC voltage swings between +V_peak and -V_peak. The peak-to-peak voltage, usually written as V_pp, is therefore V_pp = 2 · V_peak.

Power and root mean square


Main article: AC power

Alternating Current (green curve). The horizontal axis measures time; the vertical, current or voltage.

A sine wave, over one cycle (360°). The dashed line represents the root mean square (RMS) value at about 0.707 of the amplitude.


The relationship between voltage and the power delivered is

P = V² / R

where R represents a load resistance.

Rather than using instantaneous power, P(t), it is more practical to use a time-averaged power (where the averaging is performed over any integer number of cycles). Therefore, AC voltage is often expressed as a root mean square (RMS) value, written as V_rms, because

P_avg = V_rms² / R

For a sinusoidal voltage:

V_rms = V_peak / √2

The factor √2 is called the crest factor, which varies for different waveforms.

For a triangle waveform centered about zero:

V_rms = V_peak / √3

For a square waveform centered about zero:

V_rms = V_peak

For an arbitrary periodic waveform v(t) of period T:

V_rms = √( (1/T) ∫[0→T] v(t)² dt )

Example


To illustrate these concepts, consider a 230 V AC mains supply used in many countries around the world. It is so called because its root mean square value is 230 V. This means that the time-averaged power delivered is equivalent to the power delivered by a DC voltage of 230 V. To determine the peak voltage (amplitude), we can rearrange the above equation to:

V_peak = √2 × V_rms

For 230 V AC, the peak voltage is therefore √2 × 230 V, which is about 325 V. The peak-to-peak value of the 230 V AC is double that, at about 650 V.
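The same rearrangement can be written as a short Python sketch, assuming a pure sine wave; the function name and values are illustrative.

```python
import math

def peak_from_rms(v_rms: float) -> float:
    """Peak voltage of a pure sine wave with the given RMS value."""
    return math.sqrt(2) * v_rms

v_rms = 230.0
v_peak = peak_from_rms(v_rms)
print(round(v_peak))        # 325 V
print(round(2 * v_peak))    # 651 V, i.e. roughly 650 V peak-to-peak
```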

root mean square (abbreviated RMS or rms), also known as the quadratic mean, is a statistical measure of the magnitude of a varying quantity. It is especially useful when variates are positive and negative, e.g., sinusoids. RMS is used in various fields, including electrical engineering. It can be calculated for a series of discrete values or for a continuously varying function. Its name comes from its definition as the square root of the mean of the squares of the values. It is a special case of the generalized mean with the exponent p = 2.

Definition


The RMS value of a set of values (or a continuous-time waveform) is the square root of the arithmetic mean (average) of the squares of the original values (or the square of the function that defines the continuous waveform). In the case of a set of n values {x1, x2, ..., xn}, the RMS value is given by this formula:

x_rms = √( (1/n)(x1² + x2² + ... + xn²) )

The corresponding formula for a continuous function (or waveform) f(t) defined over the interval T1 ≤ t ≤ T2 is

f_rms = √( (1/(T2 - T1)) ∫[T1→T2] f(t)² dt )

and the RMS for a function over all time is

f_rms = lim (T→∞) √( (1/(2T)) ∫[-T→T] f(t)² dt )

The RMS over all time of a periodic function is equal to the RMS of one period of the function. The RMS value of a continuous function or signal can be approximated by taking the RMS of a series of equally spaced samples. Additionally, the RMS value of various waveforms can also be determined without calculus, as shown by Cartwright.[1] In the case of the RMS statistic of a random process, the expected value is used instead of the mean.
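Here is a minimal Python sketch of the discrete-sample formula above; the function name and sample values are illustrative.

```python
import math

def rms(values):
    """Root mean square of a sequence of sample values."""
    return math.sqrt(sum(x * x for x in values) / len(values))

print(rms([1, -1, 1, -1]))   # 1.0 (square-wave-like samples)
print(rms([0.0, 3.0, 4.0]))  # ~2.887
```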

RMS of common waveforms

Waveform                  RMS
DC, constant              a
Sine wave                 a/√2
Square wave               a
DC-shifted square wave    √(a0² + a²)
Modified square wave      a/√2
Triangle wave             a/√3
Sawtooth wave             a/√3
Pulse train               a·√D

Notes: t is time, f is frequency, a is amplitude (peak value), a0 is the DC offset, D is the duty cycle or the percent (%) of the period (1/f) spent high, and {r} is the fractional part of r.

Waveforms made by summing known simple waveforms have an RMS that is the root of the sum of squares of the component RMS values, if the component waveforms are orthogonal (that is, if the average of the product of one simple waveform with another is zero for all pairs other than a waveform times itself).

A special case of this, particularly helpful in electrical engineering, is

RMS_total = √( RMS_DC² + RMS_AC² )

where RMS_DC refers to the DC component of the signal and RMS_AC is the AC component of the signal.
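A short Python sketch of this special case; the names and values are illustrative.

```python
import math

def total_rms(rms_dc: float, rms_ac: float) -> float:
    """RMS of a signal with a DC offset plus an orthogonal AC component."""
    return math.sqrt(rms_dc**2 + rms_ac**2)

# Example: a 5 V DC offset with a sine wave of 3 V RMS riding on it.
print(total_rms(5.0, 3.0))  # ~5.831 V
```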

Average electrical power


Main article: AC power

Electrical engineers often need to know the power, P, dissipated by an electrical resistance, R. It is easy to do the calculation when there is a constant current, I, through the resistance. For a load of R ohms, power is defined simply as:

P = I² R

However, if the current is a time-varying function, I(t), this formula must be extended to reflect the fact that the current (and thus the instantaneous power) is varying over time. If the function is periodic (such as household AC power), it is still meaningful to talk about the average power dissipated over time, which we calculate by taking the average power dissipation:

P_avg = ⟨ I(t)² R ⟩      (where ⟨ ⟩ denotes the mean of a function)
      = R ⟨ I(t)² ⟩       (as R does not vary over time, it can be factored out)
      = R I_rms²           (by definition of RMS)

So, the RMS value, I_rms, of the function I(t) is the constant current that yields the same power dissipation as the time-averaged power dissipation of the current I(t). We can also show by the same method that for a time-varying voltage, V(t), with RMS value V_rms,

P_avg = V_rms² / R

This equation can be used for any periodic waveform, such as a sinusoidal or sawtooth waveform, allowing us to calculate the mean power delivered into a specified load. By taking the square root of both these equations and multiplying them together, we get the equation

P_avg = I_rms V_rms

Both derivations depend on voltage and current being proportional (i.e., the load, R, is purely resistive). Reactive loads (i.e., loads capable of not just dissipating energy but also storing it) are discussed under the topic of AC power. In the common case of alternating current when I(t) is a sinusoidal current, as is approximately true for mains power, the RMS value is easy to calculate from the continuous case equation above. If we define I_p to be the peak current, then:

I_rms = √( (1/(T2 - T1)) ∫[T1→T2] (I_p sin(ωt))² dt )

where t is time and ω is the angular frequency (ω = 2π/T, where T is the period of the wave). Since I_p is a positive constant:

I_rms = I_p √( (1/(T2 - T1)) ∫[T1→T2] sin²(ωt) dt )

Using the trigonometric identity sin²(ωt) = (1 - cos(2ωt))/2 to eliminate squaring of the trig function:

I_rms = I_p √( (1/(T2 - T1)) ∫[T1→T2] (1 - cos(2ωt))/2 dt )

but since the interval is a whole number of complete cycles (per the definition of RMS), the cosine terms integrate to zero, leaving:

I_rms = I_p / √2

A similar analysis leads to the analogous equation for sinusoidal voltage:

V_rms = V_p / √2

where I_p represents the peak current and V_p represents the peak voltage. It bears repeating that these two solutions are for a sinusoidal wave only. Because of their usefulness in carrying out power calculations, listed voltages for power outlets, e.g. 120 V (USA) or 230 V (Europe), are almost always quoted in RMS values, and

not peak values. Peak values can be calculated from RMS values from the above formula, which implies V_p = V_rms × √2, assuming the source is a pure sine wave. Thus the peak value of the mains voltage in the USA is about 120 × √2, or about 170 volts. The peak-to-peak voltage, being twice this, is about 340 volts. A similar calculation indicates that the peak-to-peak mains voltage in Europe is about 650 volts. It is also possible to calculate the RMS power of a signal. By analogy with RMS voltage and RMS current, RMS power is the square root of the mean of the square of the power over some specified time period. This quantity, which would be expressed in units of watts (RMS), has no physical significance. However, the term "RMS power" is sometimes used in the audio industry as a synonym for "mean power" or "average power". For a discussion of audio power measurements and their shortcomings, see Audio power.
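The mains examples above, and the average power P_avg = V_rms²/R for a purely resistive load, can be checked with a brief Python sketch; the values and the 100 Ω load are illustrative.

```python
import math

def average_power(v_rms: float, r_ohms: float) -> float:
    """Mean power delivered to a purely resistive load: P_avg = V_rms^2 / R."""
    return v_rms**2 / r_ohms

# Peak and peak-to-peak of nominal mains voltages (pure sine wave assumed)
for v_rms in (120.0, 230.0):
    v_peak = v_rms * math.sqrt(2)
    print(f"{v_rms} V RMS -> peak {v_peak:.0f} V, peak-to-peak {2 * v_peak:.0f} V")

print(average_power(230.0, 100.0))  # 529.0 W into a 100 ohm resistive load
```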

Root-mean-square speed


Main article: Root-mean-square speed In the physics of gas molecules, the root-mean-square speed is defined as the square root of the average squared speed. The RMS speed of an ideal gas is calculated using the following equation:

v_rms = √( 3RT / M )

where R represents the ideal gas constant, 8.314 J/(mol·K), T is the temperature of the gas in kelvins, and M is the molar mass of the gas in kilograms per mole. The generally accepted terminology for speed as compared to velocity is that the former is the scalar magnitude of the latter. Therefore, although the average speed is between zero and the RMS speed, the average velocity for a stationary gas is zero.
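A minimal Python sketch of v_rms = √(3RT/M); the nitrogen example and helper name are illustrative.

```python
import math

R_GAS = 8.314  # J/(mol*K), ideal gas constant

def rms_speed(temp_kelvin: float, molar_mass_kg_per_mol: float) -> float:
    """Root-mean-square speed of ideal-gas molecules: v_rms = sqrt(3RT/M)."""
    return math.sqrt(3 * R_GAS * temp_kelvin / molar_mass_kg_per_mol)

# Nitrogen (N2, M ~ 0.028 kg/mol) at room temperature (293 K)
print(round(rms_speed(293.0, 0.028)))  # ~511 m/s
```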

Root-mean-square error


Main article: Root-mean-square error When two data sets (one set from theoretical prediction and the other from actual measurement of some physical variable, for instance) are compared, the RMS of the pairwise differences of the two data sets can serve as a measure of how far on average the error is from 0. The mean of the pairwise differences does not measure the variability of the difference, and the variability as indicated by the standard deviation is around the mean instead of 0. Therefore, the RMS of the differences is a meaningful measure of the error.
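A small Python sketch of the RMS of pairwise differences; the two short data sets are made up purely for illustration.

```python
import math

def rms_error(predicted, measured):
    """Root mean square of the pairwise differences between two equal-length data sets."""
    diffs = [p - m for p, m in zip(predicted, measured)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

print(rms_error([1.0, 2.0, 3.0], [1.1, 1.9, 3.3]))  # ~0.19
```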

RMS in frequency domain


The RMS can be computed in the frequency domain, using Parseval's theorem. For a sampled signal x[n],

X[m] = FFT{ x[n] }, where m = 0, 1, ..., N - 1 and N is the number of samples.

In this case, the RMS computed in the time domain is the same as in the frequency domain:

RMS{ x[n] } = √( (1/N) Σ_n x[n]² ) = √( (1/N²) Σ_m |X[m]|² )
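The time-domain and frequency-domain computations can be compared numerically; the sketch below assumes NumPy is available and uses an arbitrary test signal.

```python
import numpy as np

# RMS computed in the time domain vs. via the FFT (Parseval's theorem)
x = np.sin(2 * np.pi * 5 * np.arange(1024) / 1024) + 0.5   # sine plus a DC offset
rms_time = np.sqrt(np.mean(x**2))
rms_freq = np.sqrt(np.sum(np.abs(np.fft.fft(x))**2)) / len(x)
print(rms_time, rms_freq)   # both ~0.866
```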

Frequency is the number of occurrences of a repeating event per unit time. It is also referred to as temporal frequency, which emphasizes the contrast to spatial frequency and angular frequency. The period is the duration of one cycle in a repeating event, so the period is the reciprocal of the frequency. For example, if a newborn baby's heart beats at a frequency of 120 times a minute, its period (the interval between beats) is half a second.

Definitions and units


For cyclical processes, such as rotation, oscillations, or waves, frequency is defined as a number of cycles per unit time. In physics and engineering disciplines, such as optics, acoustics, and radio, frequency is usually denoted by a Latin letter f or by the Greek letter ν (nu). Note that the related concept, angular frequency, is usually denoted by the Greek letter ω (omega), which uses the SI unit radians per second (rad/s). For counts per unit of time, the SI unit for frequency is the hertz (Hz), named after the German physicist Heinrich Hertz; 1 Hz means that an event repeats once per second. A previous name for this unit was cycles per second. A traditional unit of measure used with rotating mechanical devices is revolutions per minute, abbreviated RPM. 60 RPM equals one hertz.[1] The period, usually denoted by T, is the length of time taken by one cycle, and is the reciprocal of the frequency f:

T = 1 / f

The SI unit for period is the second.

Measurement

Sinusoidal waves of various frequencies; the bottom waves have higher frequencies than those above. The horizontal axis represents time.

By counting


Calculating the frequency of a repeating event is accomplished by counting the number of times that event occurs within a specific time period, then dividing the count by the length of the time period. For example, if 71 events occur within 15 seconds the frequency is:

f = 71 / 15 s ≈ 4.73 Hz

If the number of counts is not very large, it is more accurate to measure the time interval for a predetermined number of occurrences, rather than the number of occurrences within a specified time.[2] The latter method introduces a random error into the count of between zero and one count, so on average half a count. This is called gating error and causes an average error in the calculated frequency of Δf = 1/(2 T_m), or a fractional error of Δf / f = 1/(2 f T_m), where T_m is the timing interval and f is the measured frequency. This error decreases with frequency, so it is a problem at low frequencies where the number of counts N is small.
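Both the counting estimate and the gating error formula can be evaluated with a few lines of Python; the 71-events-in-15-seconds figures are the example from the text.

```python
# Frequency from an event count, and the gating error of the naive counting method
events = 71
window_s = 15.0
f = events / window_s
delta_f = 1 / (2 * window_s)            # average gating error in hertz
print(f)                                 # ~4.73 Hz
print(delta_f, delta_f / f)              # ~0.033 Hz absolute, ~0.7% fractional
```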

Other types of frequency

Angular frequency is defined as the rate of change of angular displacement, θ, (during rotation), or the rate of change of the phase of a sinusoidal waveform (e.g. in oscillations and waves), or as the rate of change of the argument to the sine function:

ω = dθ/dt = 2πf

Angular frequency is commonly measured in radians per second (rad/s) but, for discrete-time signals, can also be expressed as radians per sample time, which is a dimensionless quantity.
Spatial frequency is analogous to temporal frequency, but the time axis is

replaced by one or more spatial displacement axes. E.g.:

Wavenumber, k, sometimes means the spatial frequency analogue of angular temporal frequency. In case of more than one spatial dimension, wavenumber is a vector quantity.

Electromagnetic induction is the production of a potential difference (voltage) across a conductor when it is exposed to a varying magnetic field. Michael Faraday is generally credited with the discovery of induction in 1831, though it may have been anticipated by the work of Francesco Zantedeschi in 1829.[1] Around 1830[2] to 1832,[3] Joseph Henry made a similar discovery, but did not publish his findings until later. Faraday's law of induction is a basic law of electromagnetism predicting how a magnetic field will interact with an electric circuit to produce an electromotive force (EMF). It is the fundamental operating principle of transformers, inductors, and many types of electrical motors, generators and solenoids.[4][5] The Maxwell–Faraday equation is a generalisation of Faraday's law, and forms one of Maxwell's equations. Maxwell's equations are a set of partial differential equations that, together with the Lorentz force law, form the foundation of classical electrodynamics, classical optics, and electric circuits. These fields in turn underlie modern electrical and communications technologies. Maxwell's equations describe how electric and magnetic fields are generated and altered by each other and by charges and currents. They are named after the Scottish physicist and mathematician James Clerk Maxwell, who published an early form of those equations between 1861 and 1862. The equations have two major variants. The "microscopic" set of Maxwell's equations uses total charge and total current, including the complicated charges and currents in materials at the atomic scale; it has universal applicability, but may be unfeasible to calculate. The "macroscopic" set of Maxwell's equations defines two new auxiliary fields that describe large-scale behavior without having to consider these atomic scale details, but it requires the use of parameters characterizing the electromagnetic properties of the relevant materials. The term "Maxwell's equations" is often used for other forms of Maxwell's equations. For example, space-time formulations are commonly used in high energy and gravitational physics. These formulations, defined on space-time rather than space and time separately, are manifestly[note 1] compatible with special and general relativity. In quantum mechanics, versions of Maxwell's equations based on the electric and magnetic potentials are preferred. Since the mid-20th century, it has been understood that Maxwell's equations are not exact laws of the universe, but are a classical approximation to the more accurate and fundamental theory of quantum electrodynamics. In most cases, though, quantum deviations from Maxwell's equations are immeasurably small. Exceptions occur when the particle nature of light is important or for very strong electric fields.

Conventional formulation in SI units


The equations in this section are given in the convention used with SI units. Other units commonly used are Gaussian units based on the cgs system,[1] Lorentz–Heaviside units (used mainly in particle physics), and Planck units (used in theoretical physics). See below for the formulation with Gaussian units.

Name / Integral equation / Differential equation

Gauss's law:
  ∮[∂Ω] E · dS = Q/ε0        |   ∇ · E = ρ/ε0

Gauss's law for magnetism:
  ∮[∂Ω] B · dS = 0           |   ∇ · B = 0

Maxwell–Faraday equation (Faraday's law of induction):
  ∮[∂Σ] E · dℓ = -(d/dt) ∫[Σ] B · dS   |   ∇ × E = -∂B/∂t

Ampère's circuital law (with Maxwell's correction):
  ∮[∂Σ] B · dℓ = μ0 I + μ0 ε0 (d/dt) ∫[Σ] E · dS   |   ∇ × B = μ0 J + μ0 ε0 ∂E/∂t
There are universal constants appearing in the equations; in this case the permittivity of free space ε0 and the permeability of free space μ0, a general characteristic of fundamental field equations. In the differential equations, a local description of the fields, the nabla symbol ∇ denotes the three-dimensional gradient operator, ∇· is the divergence operator and ∇× is the curl operator. The sources are appropriately taken to be local densities of charge and current. In the integral equations, a description of the fields within a region of space, Ω is any fixed volume with boundary surface ∂Ω, and Σ is any fixed open surface with boundary curve ∂Σ. Here "fixed" means the volume or surface does not change in time. Although it is possible to formulate Maxwell's equations with time-dependent surfaces and volumes, this is not actually necessary: the equations are correct and complete with time-

independent surfaces. The sources are correspondingly the total amounts of charge and current within these volumes and surfaces, found by integration. The volume integral of the total charge density ρ over any fixed volume Ω is the total electric charge Q contained in Ω:

Q = ∫∫∫[Ω] ρ dV

and the net electrical current I is the surface integral of the electric current density J, passing through any open fixed surface Σ:

I = ∫∫[Σ] J · dS

where dS denotes the differential vector element of surface area S normal to surface Σ. (Vector area is also denoted by A rather than S, but this conflicts with the magnetic potential, a separate vector field.) The "total charge or current" refers to including free and bound charges, or free and bound currents. These are used in the macroscopic formulation below.

Relationship between differential and integral formulations


The differential and integral formulations of the equations are mathematically equivalent, by the divergence theorem in the case of Gauss's law and Gauss's law for magnetism, and by the Kelvin–Stokes theorem in the case of Faraday's law and Ampère's law. Both the differential and integral formulations are useful. The integral formulation can often be used to simply and directly calculate fields from symmetric distributions of charges and currents. On the other hand, the differential formulation is a more natural starting point for calculating the fields in more complicated (less symmetric) situations, for example using finite element analysis.[2]

Flux and divergence

Closed volume Ω and boundary ∂Ω, enclosing a source (+) and sink (-) of a vector field F. Here, F could be the E field with source electric charges, but not the B field, which has no magnetic charges as shown. The outward unit normal is n.

The "fields emanating from the sources" can be inferred from the surface integrals of the fields through the closed surface ∂Ω, defined as the electric flux Φ_E = ∮[∂Ω] E · dS and the magnetic flux Φ_B = ∮[∂Ω] B · dS respectively, as well as their divergences ∇ · E and ∇ · B.

These surface integrals and divergences are connected by the divergence theorem.

Circulation and curl

Open surface Σ and boundary ∂Σ. F could be the E or B fields. Again, n is the unit normal. (The curl of a vector field doesn't literally look like the "circulations"; this is a heuristic depiction.)

The "circulation of the fields" can be interpreted from the line integrals of the fields around the closed curve ∂Σ, namely ∮[∂Σ] E · dℓ and ∮[∂Σ] B · dℓ, where dℓ is the differential vector element of path length tangential to the path/curve, as well as their curls ∇ × E and ∇ × B.

These line integrals and curls are connected by Stokes' theorem, and are analogous to quantities in classical fluid dynamics: the circulation of a fluid is the line integral of the fluid's flow velocity field around a closed loop, and the vorticity of the fluid is the curl of the velocity field.

Time evolution


The "dynamics" or "time evolution of the fields" is due to the partial derivatives of the fields with respect to time, ∂E/∂t and ∂B/∂t. These derivatives are crucial for the prediction of field propagation in the form of electromagnetic waves. Since the surface is taken to be time-independent, we can make the following transition in Faraday's law:

(d/dt) ∫[Σ] B · dS = ∫[Σ] (∂B/∂t) · dS

see differentiation under the integral sign for more on this result.

Conceptual descriptions


Gauss's law
Gauss's law describes the relationship between a static electric field and the electric charges that cause it: The static electric field points away from positive charges and towards negative charges. In the field line description, electric field lines begin only at positive electric charges and end only at negative electric charges. 'Counting' the number of field lines passing through a closed surface therefore yields the total charge (including bound charge due to polarization of material) enclosed by that surface, divided by the permittivity of free space (the vacuum permittivity). More technically, it relates the electric flux through any hypothetical closed "Gaussian surface" to the enclosed electric charge.

Gauss's law for magnetism: magnetic field lines never begin nor end but form loops or extend to infinity as shown here with the magnetic field due to a ring of current.

Gauss's law for magnetism


Gauss's law for magnetism states that there are no "magnetic charges" (also called magnetic monopoles), analogous to electric charges.[3] Instead, the magnetic field due to materials is generated by a configuration called a dipole. Magnetic dipoles are best represented as loops of current but resemble positive and negative 'magnetic charges', inseparably bound together, having no net 'magnetic charge'. In terms of field lines, this equation states that magnetic field lines neither begin nor end but make loops or extend to infinity and back. In other words, any magnetic field line that enters a given volume must somewhere exit that volume. Equivalent technical statements are that the sum total magnetic flux through any Gaussian surface is zero, or that the magnetic field is a solenoidal vector field.

Faraday's law

In a geomagnetic storm, a surge in the flux of charged particles temporarily alters Earth's magnetic field, which induces electric fields in Earth's atmosphere, thus causing surges in electrical power grids. Artist's rendition; sizes are not to scale.

Faraday's law describes how a time varying magnetic field creates ("induces") an electric field.[3] This dynamically induced electric field has closed field lines just as the magnetic field, if not superposed by a static (charge induced) electric field. This aspect of electromagnetic induction is the operating principle behind many electric generators: for example, a rotating bar magnet creates a changing magnetic field, which in turn generates an electric field in a nearby wire. (Note: there are two closely related equations which are called Faraday's law. The form used in Maxwell's equations is always valid but more restrictive than that originally formulated by Michael Faraday.)

Ampère's law with Maxwell's correction

An Wang's magnetic core memory (1954) is an application of Ampère's law. Each core stores one bit of data.

Ampère's law with Maxwell's correction states that magnetic fields can be generated in two ways: by electrical current (this was the original "Ampère's law") and by changing electric fields (this was "Maxwell's correction"). Maxwell's correction to Ampère's law is particularly important: it shows that not only does a changing magnetic field induce an electric field, but also a changing electric field induces a magnetic field.[3][4] Therefore, these equations allow self-sustaining "electromagnetic waves" to travel through empty space (see electromagnetic wave equation). The speed calculated for electromagnetic waves, which could be predicted from experiments on charges and currents,[note 2] exactly matches the speed of light; indeed, light is one form of electromagnetic radiation (as are X-rays, radio waves, and others). Maxwell understood the connection between electromagnetic waves and light in 1861, thereby unifying the theories of electromagnetism and optics.

A phase vector, or phasor, is a representation of a sinusoidal function whose amplitude (A), angular frequency (ω), and phase (θ) are time-invariant. It is a subset of a more general concept called analytic representation. Phasors separate the dependencies on A, ω, and θ into three independent factors. This can be particularly useful because the frequency factor (which includes the time-dependence of the sinusoid) is often common to all the components of a linear combination of sinusoids. In those situations, phasors allow this common feature to be factored out, leaving just the A and θ features. The result is that trigonometry reduces to algebra, and linear differential equations become algebraic ones. The term phasor therefore often refers to just those two factors. In older texts, a phasor is also referred to as a sinor.

An example of a series RLC circuit and the respective phasor diagram for a specific ω.

Definition


Euler's formula indicates that sinusoids can be represented mathematically by the sum of two complex-valued functions:

A · cos(ωt + θ) = A · (e^(i(ωt + θ)) + e^(-i(ωt + θ))) / 2   [1]

or by the real part of one of the functions:

A · cos(ωt + θ) = Re{ A · e^(i(ωt + θ)) } = Re{ A e^(iθ) · e^(iωt) }

As mentioned above, the term phasor can refer to either A e^(iθ) e^(iωt) or just the complex constant A e^(iθ). In the latter case, it is understood to be a shorthand notation, encoding the amplitude and phase of an underlying sinusoid. An even more compact shorthand is angle notation: A∠θ.

A phasor can be considered a vector rotating about the origin in a complex plane. The cosine function is the projection of the vector onto the real axis. Its amplitude is the modulus of the vector, and its argument is the total phase ωt + θ. The phase constant θ represents the angle that the vector forms with the real axis at t = 0.
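Because phasors are just complex constants, sums of equal-frequency sinusoids reduce to complex addition; the Python sketch below shows this, with illustrative amplitudes and phases.

```python
import cmath, math

# Represent sinusoids A*cos(w*t + theta) by the complex constants A*e^(i*theta)
# and add them with ordinary complex arithmetic (all share the same frequency).
def phasor(amplitude: float, phase_deg: float) -> complex:
    return cmath.rect(amplitude, math.radians(phase_deg))

total = phasor(3.0, 0.0) + phasor(4.0, 90.0)     # two sinusoids, 90 degrees apart
amplitude, phase = abs(total), math.degrees(cmath.phase(total))
print(amplitude, phase)   # 5.0, ~53.13 degrees
```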

Phasor diagram of three waves in perfect destructive interference

The electrical resistance of an electrical conductor is the opposition to the passage of an electric current through that conductor; the inverse quantity is electrical conductance, the ease with which an electric current passes. Electrical resistance shares some conceptual

parallels with the mechanical notion of friction. The SI unit of electrical resistance is the ohm (Ω), while electrical conductance is measured in siemens (S).

Definition


The ohm is defined as a resistance between two points of a conductor when a constant potential difference of 1 volt, applied to these points, produces in the conductor a current of 1 ampere, the conductor not being the seat of any electromotive force.[1]

In SI and other units,

1 Ω = 1 V/A = 1 kg·m²/(s³·A²) = 1 J·s/C² = 1 s/F = 1/S

where: V = volt, A = ampere, m = metre, kg = kilogram, s = second, C = coulomb, J = joule, S = siemens, F = farad. In many cases the resistance of a conductor in ohms is approximately constant within a certain range of voltages, temperatures, and other parameters; one speaks of linear resistors. In other cases resistance varies (e.g., thermistors). Commonly used multiples and submultiples in electrical and electronic usage are the milliohm, kilohm, megohm, and gigaohm;[2] the term 'gigohm', though not official, is in common use for the latter.[3] In alternating current circuits, electrical impedance is also measured in ohms.

Conversions

The siemens (symbol: S) is the SI derived unit of electric conductance and admittance, also known as the mho (ohm spelled backwards; symbol: ℧); it is the reciprocal of resistance in ohms.

Power as a function of resistance
The power dissipated by a linear resistor may be calculated from its resistance, and voltage or current. The formula is a combination of Ohm's law and Joule's law:

P = V²/R = I²·R

where: P = power in watts, R = resistance in ohms, V = voltage across the resistor, I = current through the resistor in amps

An object of uniform cross section has a resistance proportional to its resistivity and length and inversely proportional to its cross-sectional area. All materials show some resistance, except for superconductors, which have a resistance of zero. The resistance (R) of an object is defined as the ratio of voltage across it (V) to current through it (I), while the conductance (G) is the inverse:

R = V / I,    G = I / V = 1 / R

For a wide variety of materials and conditions, V and I are directly proportional to each other, and therefore R and G are constant (although they can depend on other factors like temperature or strain). This proportionality is called Ohm's law, and materials that satisfy it are called "ohmic" materials. In other cases, such as a diode or battery, V and I are not directly proportional, or in other words the I–V curve is not a straight line through the origin, and Ohm's law does not hold. In this case, resistance and conductance are less useful concepts, and more difficult to define. The ratio V/I is sometimes still useful, and is referred to as a "chordal resistance" or "static resistance",[1][2] as it corresponds to the inverse slope of a chord between the origin and an I–V curve. In other situations, the derivative dV/dI may be most useful; this is called the "differential resistance".

The hydraulic analogy compares electric current flowing through circuits to water flowing through pipes. When a pipe (left) is filled with hair (right), it takes a larger pressure to achieve the same flow of water. Pushing electric current through a large resistance is like pushing water through a pipe clogged with hair: It requires a larger push (electromotive force) to drive the same flow (electric current).

In the hydraulic analogy, current flowing through a wire (or resistor) is like water flowing through a pipe, and the voltage drop across the wire is like the pressure drop which pushes water through the pipe. Conductance is proportional to how much flow occurs for a given pressure, and resistance is proportional to how much pressure is required to achieve a given flow. (Conductance and resistance are reciprocals.) The voltage drop (i.e., difference in voltage between one side of the resistor and the other), not the voltage itself, provides the driving force pushing current through a resistor. In hydraulics, it is similar: The pressure difference between two sides of a pipe, not the pressure itself, determines the flow through it. For example, there may be a large water pressure above the pipe, which tries to push water down through the pipe. But there may be an equally large water pressure below the pipe, which tries to push water back up through the pipe. If these pressures are equal, no water will flow. (In the image at right, the water pressure below the pipe is zero.) The resistance and conductance of a wire, resistor, or other element is generally determined by two factors: geometry (shape) and materials. Geometry is important because it is more difficult to push water through a long, narrow pipe than a wide, short pipe. In the same way, a long, thin copper wire has higher resistance (lower conductance) than a short, thick copper wire. Materials are important as well. A pipe filled with hair restricts the flow of water more than a clean pipe of the same shape and size. In a similar way, electrons can flow freely and easily through a copper wire, but cannot as easily flow through a steel wire of the same shape and size, and they essentially cannot flow at all through an insulator like rubber, regardless of its shape. The difference between copper, steel, and rubber is related to their microscopic structure and electron configuration, and is quantified by a property called resistivity.

Ohm's law

The current-voltage characteristics of four devices: Two resistors, a diode, and a battery. The horizontal axis is voltage drop, the vertical axis is current. Ohm's law is satisfied when the graph is a straight line through the origin. Therefore, the two resistors are "ohmic", but the diode and battery are not.

Main article: Ohm's law
Ohm's law is an empirical law relating the voltage V across an element to the current I through it:

V = IR

(V is directly proportional to I). This law is not always true: for example, it is false for diodes, batteries, etc. However, it is true to a very good approximation for wires and resistors (assuming that other conditions, including temperature, are held fixed). Materials or objects where Ohm's law is true are called "ohmic", whereas objects which do not obey Ohm's law are "non-ohmic".

Relation to resistivity and conductivity

A piece of resistive material with electrical contacts on both ends.

Main article: Electrical resistivity and conductivity
The resistance of a given object depends primarily on two factors: what material it is made of, and its shape. For a given material, the resistance is inversely proportional to the cross-sectional area; for example, a thick copper wire has lower resistance than an otherwise-identical thin copper wire. Also, for a given material, the resistance is proportional to the length; for example, a long copper wire has higher resistance than an otherwise-identical short copper wire. The resistance R and conductance G of a conductor of uniform cross section can therefore be computed as

R = ρℓ/A,  G = σA/ℓ

where ℓ is the length of the conductor, measured in metres (m), A is the cross-sectional area of the conductor, measured in square metres (m²), σ (sigma) is the electrical conductivity, measured in siemens per metre (S·m⁻¹), and ρ (rho) is the electrical resistivity (also called specific electrical resistance) of the material, measured in ohm-metres (Ω·m). The resistivity and conductivity are proportionality constants, and therefore depend only on the material the wire is made of, not on the geometry of the wire. Resistivity and conductivity are reciprocals: ρ = 1/σ. Resistivity is a measure of the material's ability to oppose electric current.

This formula is not exact: it assumes the current density is totally uniform in the conductor, which is not always true in practical situations. However, this formula still provides a good approximation for long thin conductors such as wires. Another situation for which this formula is not exact is with alternating current (AC), because the skin effect inhibits current flow near the center of the conductor. Then, the geometrical cross-section is different from the effective cross-section in which current is actually flowing, so the resistance is higher than expected. Similarly, if two conductors near each other carry AC current, their resistances will increase due to the proximity effect. At commercial power frequency, these effects are significant for large conductors carrying large currents, such as busbars in an electrical substation,[3] or large power cables carrying more than a few hundred amperes.
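To make the relation R = ρℓ/A concrete, here is a minimal Python sketch with illustrative values; the resistivity used is the commonly quoted room-temperature figure for copper, about 1.68 × 10⁻⁸ Ω·m (an assumption for the example, not a value from the text above):

```python
import math

def wire_resistance(resistivity_ohm_m, length_m, diameter_m):
    """Resistance of a uniform round wire: R = rho * L / A."""
    area = math.pi * (diameter_m / 2) ** 2
    return resistivity_ohm_m * length_m / area

# 10 m of 1 mm diameter copper wire (rho ~ 1.68e-8 ohm-metres at room temperature)
R = wire_resistance(1.68e-8, 10.0, 1e-3)
G = 1 / R                      # conductance in siemens
print(f"R = {R:.4f} ohm, G = {G:.2f} S")   # about 0.21 ohm
```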

What determines resistivity?


Main article: Electrical resistivity and conductivity
The resistivity of different materials varies by an enormous amount: for example, the conductivity of teflon is about 10^30 times lower than the conductivity of copper. Why is there such a difference? Loosely speaking, a metal has large numbers of "delocalized" electrons that are not stuck in any one place, but are free to move across large distances, whereas in an insulator (like teflon), each electron is tightly bound to a single molecule, and a great force is required to pull it away. Semiconductors lie between these two extremes. More details can be found in the article: Electrical resistivity and conductivity. For the case of electrolyte solutions, see the article: Conductivity (electrolytic). Resistivity varies with temperature. In semiconductors, resistivity also changes when light shines on them. These are discussed below.

Static and differential resistance

The I-V curve of a non-ohmic device (purple). The static resistance at point A is the inverse slope of line B through the origin. The differential resistance at A is the inverse slope of tangent line C.

The I-V curve of a component with negative differential resistance, an unusual phenomenon in which the I-V curve is non-monotonic.

See also: Small-signal model
Many electrical elements, such as diodes and batteries, do not satisfy Ohm's law. These are called non-ohmic or nonlinear, and are characterized by an I-V curve which is not a straight line through the origin. Resistance and conductance can still be defined for non-ohmic elements. However, unlike ohmic resistance, nonlinear resistance is not constant but varies with the voltage or current through the device, i.e. with its operating point. There are two types:[1][2]

Static resistance (also called chordal or DC resistance) - This corresponds to the usual definition of resistance: the voltage divided by the current, R_static = V/I. It is the inverse slope of the chord from the origin through the point on the I-V curve. Static resistance determines the power dissipation in an electrical component. Points on the I-V curve located in the 2nd or 4th quadrants, for which the slope of the chordal line is negative, have negative static resistance. Passive devices, which have no source of energy, cannot have negative static resistance. However, active devices such as transistors or op-amps can synthesize negative static resistance with feedback, and it is used in some circuits such as gyrators.

Differential resistance (also called dynamic, incremental or small-signal resistance) - Differential resistance is the derivative of the voltage with respect to the current: the inverse slope of the I-V curve at a point, r_diff = dV/dI.

If the I-V curve is non-monotonic (with peaks and troughs), the curve will have a negative slope in some regions; in these regions the device has negative differential resistance. Devices with negative differential resistance can amplify a signal applied to them, and are used to make amplifiers and oscillators. These include tunnel diodes, Gunn diodes, IMPATT diodes, magnetron tubes, and unijunction transistors.
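A short numerical sketch of the two definitions, assuming an idealized exponential diode characteristic as the non-ohmic element; the saturation current and thermal voltage below are illustrative assumptions, not values from the text:

```python
import math

def diode_current(v, i_s=1e-12, v_t=0.025):
    """Idealized Shockley diode: I = I_s * (exp(V/V_T) - 1)."""
    return i_s * (math.exp(v / v_t) - 1)

v = 0.6                          # operating point, volts
i = diode_current(v)
static_r = v / i                 # chordal (static) resistance, V/I

dv = 1e-6                        # numerical derivative dV/dI
di = diode_current(v + dv) - diode_current(v - dv)
differential_r = 2 * dv / di

print(f"static R  ~ {static_r:.1f} ohm")
print(f"dynamic r ~ {differential_r:.2f} ohm")   # much smaller than V/I for this diode
```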

AC circuits


Impedance and admittance

The voltage (red) and current (blue) versus time (horizontal axis) for a capacitor (top) and inductor (bottom). Since the amplitudes of the current and voltage sinusoids are the same, the absolute value of impedance is 1 for both the capacitor and the inductor (in whatever units the graph is using). On the other hand, the phase difference between current and voltage is -90° for the capacitor; therefore, the complex phase of the impedance of the capacitor is -90°. Similarly, the phase difference between current and voltage is +90° for the inductor; therefore, the complex phase of the impedance of the inductor is +90°.

Main articles: Electrical impedance and Admittance
When an alternating current flows through a circuit, the relation between current and voltage across a circuit element is characterized not only by the ratio of their magnitudes, but also by the difference in their phases. For example, in an ideal resistor, the moment when the voltage reaches its maximum, the current also reaches its maximum (current and voltage are oscillating in phase). But for a capacitor or inductor, the maximum current flow occurs as the voltage passes through zero, and vice versa (current and voltage are oscillating 90° out of phase; see image at right). Complex numbers are used to keep track of both the phase and magnitude of current and voltage:

V(t) = Re(V0 · e^(jωt)),  I(t) = Re(I0 · e^(jωt)),  V0 = Z·I0,  I0 = Y·V0

where
t is time,
V(t) and I(t) are, respectively, voltage and current as functions of time,
V0, I0, Z, and Y are complex numbers,
Z is called impedance, Y is called admittance,
Re indicates the real part,
ω is the angular frequency of the AC current, and
j is the imaginary unit.

The impedance and admittance may be expressed as complex numbers that can be broken into real and imaginary parts:

Z = R + jX,  Y = G + jB

where R and G are resistance and conductance respectively, X is reactance, and B is susceptance. For ideal resistors, Z and Y reduce to R and G respectively, but for AC networks containing capacitors and inductors, X and B are nonzero. Y = 1/Z for AC circuits, just as G = 1/R for DC circuits.
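A brief sketch of these relations in code; the component values and frequency are illustrative assumptions:

```python
# Complex impedance and admittance of a series resistor-inductor branch.
import cmath

R, L = 100.0, 50e-3             # 100 ohm resistor in series with a 50 mH inductor
f = 1000.0                      # drive frequency, Hz
w = 2 * cmath.pi * f            # angular frequency

Z = R + 1j * w * L              # Z = R + jX with X = wL
Y = 1 / Z                       # admittance Y = G + jB

print(f"Z = {Z.real:.1f} + j{Z.imag:.1f} ohm")
print(f"G = {Y.real*1e3:.3f} mS, B = {Y.imag*1e3:.3f} mS")
print(f"|Z| = {abs(Z):.1f} ohm, phase = {cmath.phase(Z)*180/cmath.pi:.1f} deg")
```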

Frequency dependence of resistance


Another complication of AC circuits is that the resistance and conductance can be frequency-dependent. One reason, mentioned above, is the skin effect (and the related proximity effect). Another reason is that the resistivity itself may depend on frequency (see Drude model, deep-level traps, resonant frequency, Kramers-Kronig relations, etc.).

Energy dissipation and Joule heating

Running current through a resistance creates heat, in a phenomenon called Joule heating. In this picture, a cartridge heater, warmed by Joule heating, is glowing red hot.

Main article: Joule heating
Resistors (and other elements with resistance) oppose the flow of electric current; therefore, electrical energy is required to push current through the resistance. This electrical energy is dissipated, heating the resistor in the process. This is called Joule heating (after James Prescott Joule), also called ohmic heating or resistive heating. The dissipation of electrical energy is often undesired, particularly in the case of transmission losses in power lines. High-voltage transmission helps reduce the losses by reducing the current for a given power. On the other hand, Joule heating is sometimes useful, for example in electric stoves and other electric heaters (also called resistive heaters). As another example, incandescent lamps rely on Joule heating: the filament is heated to such a high temperature that it glows "white hot" with thermal radiation (also called incandescence). The formula for Joule heating is:

P = I²R

where P is the power (energy per unit time) converted from electrical energy to thermal energy, R is the resistance, and I is the current through the resistor.
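A small sketch of P = I²R applied to transmission losses; all numbers are illustrative assumptions chosen only to show why raising the voltage lowers the loss:

```python
# Why high-voltage transmission reduces Joule losses.
def line_loss(power_w, voltage_v, line_resistance_ohm):
    """Loss P_loss = I^2 * R, with line current I = P / V."""
    current = power_w / voltage_v
    return current ** 2 * line_resistance_ohm

P, R_line = 1e6, 2.0                    # deliver 1 MW over a 2-ohm line
for V in (10e3, 100e3):
    loss = line_loss(P, V, R_line)
    print(f"at {V/1e3:.0f} kV: loss = {loss/1e3:.1f} kW ({100*loss/P:.2f} %)")
```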

Dependence of resistance on other conditions


Temperature dependence
Main article: Electrical resistivity and conductivity#Temperature dependence
Near room temperature, the resistivity of metals typically increases as temperature is increased, while the resistivity of semiconductors typically decreases as temperature is increased. The resistivity of insulators and electrolytes may increase or decrease depending on the system. For the detailed behavior and explanation, see Electrical resistivity and conductivity. As a consequence, the resistance of wires, resistors, and other components often changes with temperature. This effect may be undesired, causing an electronic circuit to malfunction at extreme temperatures. In some cases, however, the effect is put to good use. When the temperature-dependent resistance of a component is used purposefully, the component is called a resistance thermometer or thermistor. (A resistance thermometer is made of metal, usually platinum, while a thermistor is made of ceramic or polymer.)

Resistance thermometers and thermistors are generally used in two ways. First, they can be used as thermometers: by measuring the resistance, the temperature of the environment can be inferred. Second, they can be used in conjunction with Joule heating (also called self-heating): if a large current runs through the resistor, the resistor's temperature rises and therefore its resistance changes. Therefore, these components can be used in a circuit-protection role similar to fuses, or for feedback in circuits, or for many other purposes. In general, self-heating can turn a resistor into a nonlinear and hysteretic circuit element. For more details see Thermistor#Self-heating effects. If the temperature T does not vary too much, a linear approximation is typically used:

R(T) = R(T₀) · [1 + α·(T − T₀)]

where α is called the temperature coefficient of resistance, T₀ is a fixed reference temperature (usually room temperature), and R(T₀) is the resistance at temperature T₀. The parameter α is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, α is different for different reference temperatures. For this reason it is usual to specify the temperature at which α was measured with a suffix, and the relationship only holds in a range of temperatures around the reference.[9] The temperature coefficient α is typically +3×10⁻³ K⁻¹ to +6×10⁻³ K⁻¹ for metals near room temperature. It is usually negative for semiconductors and insulators, with highly variable magnitude.[10]
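A minimal sketch of the linear approximation R(T) = R(T₀)[1 + α(T − T₀)]; the coefficient used below is a copper-like value of about 3.9 × 10⁻³ K⁻¹ (an assumption within the range quoted above):

```python
# Linear temperature model for resistance.
def resistance_at(temp_c, r_ref, alpha=3.9e-3, temp_ref_c=20.0):
    """R(T) = R(T0) * (1 + alpha * (T - T0))."""
    return r_ref * (1 + alpha * (temp_c - temp_ref_c))

r20 = 100.0                                   # 100 ohm at 20 degC
for t in (0.0, 20.0, 100.0):
    print(f"{t:5.1f} degC -> {resistance_at(t, r20):7.2f} ohm")
```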

Electrical impedance

A graphical representation of the complex impedance plane

Electrical impedance is the measure of the opposition that a circuit presents to a current when a voltage is applied. In quantitative terms, it is the complex ratio of the voltage to the current in an alternating current (AC) circuit. Impedance extends the concept of resistance to AC circuits, and possesses both magnitude and phase, unlike resistance, which has only magnitude. When a circuit is driven with direct current (DC), there is no distinction between impedance and resistance; the latter can be thought of as impedance with zero phase angle.

It is necessary to introduce the concept of impedance in AC circuits because there are two additional impeding mechanisms to be taken into account besides the normal resistance of DC circuits: the induction of voltages in conductors self-induced by the magnetic fields of currents (inductance), and the electrostatic storage of charge induced by voltages between conductors (capacitance). The impedance caused by these two effects is collectively referred to as reactance and forms the imaginary part of complex impedance, whereas resistance forms the real part. The symbol for impedance is usually Z, and it may be represented by writing its magnitude and phase in the form |Z|∠θ. However, complex number representation is often more powerful for circuit analysis purposes. The term impedance was coined by Oliver Heaviside in July 1886.[1][2] Arthur Kennelly was the first to represent impedance with complex numbers in 1893.[3]

Impedance is defined as the frequency-domain ratio of the voltage to the current.[4] In other words, it is the voltage-current ratio for a single complex exponential at a particular frequency ω. In general, impedance will be a complex number, with the same units as resistance, for which the SI unit is the ohm (Ω). For a sinusoidal current or voltage input, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular,

The magnitude of the complex impedance is the ratio of the voltage amplitude to the current amplitude.

The phase of the complex impedance is the phase shift by which the current lags the voltage.

The reciprocal of impedance is admittance (i.e., admittance is the current-to-voltage ratio, and it conventionally carries units of siemens, formerly called mhos).

Complex impedance


Impedance is represented as a complex quantity, and the term complex impedance may be used interchangeably; the polar form conveniently captures both magnitude and phase characteristics,

Z = |Z| · e^(jθ)

where the magnitude |Z| represents the ratio of the voltage difference amplitude to the current amplitude, while the argument θ gives the phase difference between voltage and current. j is the imaginary unit, and is used instead of i in this context to avoid confusion with the symbol for electric current. In Cartesian form,

Z = R + jX

where the real part of impedance is the resistance R and the imaginary part is the reactance X.

Where it is required to add or subtract impedances, the Cartesian form is more convenient, but when quantities are multiplied or divided the calculation becomes simpler if the polar form is used. A circuit calculation, such as finding the total impedance of two impedances in parallel, may require conversion between forms several times during the calculation. Conversion between the forms follows the normal conversion rules of complex numbers.
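A brief sketch of converting between Cartesian and polar forms and combining two impedances in parallel, using Python's cmath module; the component values are illustrative assumptions:

```python
# Converting between Cartesian (R + jX) and polar (|Z|, angle) forms, and
# combining two impedances in parallel.
import cmath

Z1 = 50 + 30j                      # Cartesian form
Z2 = cmath.rect(100, cmath.pi / 4) # polar form: |Z| = 100 ohm, angle = 45 deg

Z_par = Z1 * Z2 / (Z1 + Z2)        # parallel combination (product over sum)

mag, phase = cmath.polar(Z_par)    # back to polar for the final answer
print(f"Z_parallel = {Z_par.real:.1f} + j{Z_par.imag:.1f} ohm")
print(f"|Z| = {mag:.1f} ohm, angle = {phase*180/cmath.pi:.1f} deg")
```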

Ohm's law
The meaning of electrical impedance can be understood by substituting it into Ohm's law:[5][6]

V = I·Z = I·|Z|·e^(jθ)

The magnitude of the impedance |Z| acts just like resistance, giving the drop in voltage amplitude across an impedance Z for a given current I. The phase factor tells us that the current lags the voltage by a phase of θ (i.e., in the time domain, the current signal is shifted later with respect to the voltage signal).

Just as impedance extends Ohm's law to cover AC circuits, other results from DC circuit analysis such as voltage division, current division, Thévenin's theorem, and Norton's theorem can also be extended to AC circuits by replacing resistance with impedance.

An AC supply applying a voltage V across a load Z, driving a current I.

Complex voltage and current

Generalized impedances in a circuit can be drawn with the same symbol as a resistor (US ANSI or DIN Euro) or with a labeled box.

In order to simplify calculations, sinusoidal voltage and current waves are commonly represented as complex-valued functions of time, denoted V(t) = |V|·e^(j(ωt + φ_V)) and I(t) = |I|·e^(j(ωt + φ_I)).[7][8]

Impedance is defined as the ratio of these quantities: Z = V(t)/I(t) = |Z|·e^(jθ).

Substituting these into Ohm's law, we have

|V|·e^(j(ωt + φ_V)) = |I|·e^(j(ωt + φ_I)) · |Z|·e^(jθ)

Noting that this must hold for all t, we may equate the magnitudes and phases to obtain

|V| = |I|·|Z|  and  φ_V = φ_I + θ

The magnitude equation is the familiar Ohm's law applied to the voltage and current amplitudes, while the second equation defines the phase relationship.

Validity of complex representation


This representation using complex exponentials may be justified by noting that (by Euler's formula):

cos(ωt + φ) = (1/2)·(e^(j(ωt + φ)) + e^(−j(ωt + φ)))

The real-valued sinusoidal function representing either voltage or current may be broken into two complex-valued functions. By the principle of superposition, we may analyse the behaviour of the sinusoid on the left-hand side by analysing the behaviour of the two complex terms on the right-hand side. Given the symmetry, we only need to perform the analysis for one right-hand term; the results will be identical for the other. At the end of any calculation, we may return to real-valued sinusoids by further noting that

cos(ωt + φ) = Re{e^(j(ωt + φ))}

Device examples

The phase angles in the equations for the impedance of inductors and capacitors indicate that the voltage across a capacitor lags the current through it by a phase of 90°, while the voltage across an inductor leads the current through it by 90°. The identical voltage and current amplitudes indicate that the magnitude of the impedance is equal to one.

The impedance of an ideal resistor is purely real and is referred to as a resistive impedance:

Z_R = R

In this case, the voltage and current waveforms are proportional and in phase. Ideal inductors and capacitors have a purely imaginary, reactive impedance: the impedance of inductors increases as frequency increases,

Z_L = jωL

while the impedance of capacitors decreases as frequency increases,

Z_C = 1/(jωC)

In both cases, for an applied sinusoidal voltage, the resulting current is also sinusoidal, but in quadrature, 90 degrees out of phase with the voltage. However, the phases have opposite signs: in an inductor, the current is lagging; in a capacitor the current is leading. Note the following identities for the imaginary unit and its reciprocal:

j = e^(jπ/2),  1/j = −j = e^(−jπ/2)

Thus the inductor and capacitor impedance equations can be rewritten in polar form:

Z_L = ωL · e^(jπ/2),  Z_C = (1/(ωC)) · e^(−jπ/2)

The magnitude gives the change in voltage amplitude for a given current amplitude through the impedance, while the exponential factors give the phase relationship.
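The polar forms can be checked numerically; the component values and frequency below are assumptions chosen only for illustration:

```python
# Checking the phases of inductor and capacitor impedances
# (10 mH inductor, 1 uF capacitor, 1 kHz drive).
import cmath

w = 2 * cmath.pi * 1000           # angular frequency, rad/s
L, C = 10e-3, 1e-6

Z_L = 1j * w * L                  # jwL
Z_C = 1 / (1j * w * C)            # 1/(jwC)

for name, Z in (("inductor", Z_L), ("capacitor", Z_C)):
    mag, phase = cmath.polar(Z)
    print(f"{name}: |Z| = {mag:.1f} ohm, phase = {phase*180/cmath.pi:+.0f} deg")
# prints +90 deg for the inductor and -90 deg for the capacitor
```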

Deriving the device-specific impedances
What follows below is a derivation of impedance for each of the three basic circuit elements: the resistor, the capacitor, and the inductor. Although the idea can be extended to define the relationship between the voltage and current of any arbitrary signal, these derivations will assume sinusoidal signals, since any arbitrary signal can be approximated as a sum of sinusoids through Fourier analysis.

Resistor


For a resistor, there is the relation:

v_R(t) = R · i_R(t)

This is Ohm's law. Considering the voltage signal to be

v_R(t) = V_P sin(ωt)

it follows that

v_R(t) / i_R(t) = (V_P sin(ωt)) / ((V_P / R) sin(ωt)) = R

This says that the ratio of AC voltage amplitude to alternating current (AC) amplitude across a resistor is R, and that the AC voltage leads the current across a resistor by 0 degrees. This result is commonly expressed as

Z_resistor = R

Capacitor
For a capacitor, there is the relation:

i_C(t) = C · dv_C(t)/dt

Considering the voltage signal to be

v_C(t) = V_P sin(ωt)

it follows that

dv_C(t)/dt = ω V_P cos(ωt)

And thus

v_C(t) / i_C(t) = (V_P sin(ωt)) / (ω V_P C cos(ωt)) = sin(ωt) / (ωC sin(ωt + π/2))

This says that the ratio of AC voltage amplitude to AC current amplitude across a capacitor is 1/(ωC), and that the AC voltage lags the AC current across a capacitor by 90 degrees (or the AC current leads the AC voltage across a capacitor by 90 degrees). This result is commonly expressed in polar form as

Z_capacitor = (1/(ωC)) · e^(−jπ/2)

or, by applying Euler's formula, as

Z_capacitor = 1/(jωC) = −j/(ωC)

Inductor
For the inductor, we have the relation:

v_L(t) = L · di_L(t)/dt

This time, considering the current signal to be

i_L(t) = I_P sin(ωt)

it follows that

di_L(t)/dt = ω I_P cos(ωt)

And thus

v_L(t) / i_L(t) = (ω L I_P cos(ωt)) / (I_P sin(ωt)) = ωL sin(ωt + π/2) / sin(ωt)

This says that the ratio of AC voltage amplitude to AC current amplitude across an inductor is ωL, and that the AC voltage leads the AC current across an inductor by 90 degrees. This result is commonly expressed in polar form as

Z_inductor = ωL · e^(jπ/2)

or, using Euler's formula, as

Z_inductor = jωL

Generalised s-plane impedance


Impedance defined in terms of jω can strictly be applied only to circuits that are energised with a steady-state AC signal. The concept of impedance can be extended to a circuit energised with any arbitrary signal by using complex frequency instead of jω. Complex frequency is given the symbol s and is, in general, a complex number. Signals are expressed in terms of complex frequency by taking the Laplace transform of the time-domain expression of the signal. The impedance of the basic circuit elements in this more general notation is as follows:

Element      Impedance expression
Resistor     R
Inductor     sL
Capacitor    1/(sC)

For a DC circuit this simplifies to s = 0. For a steady-state sinusoidal AC signal, s = jω.

Resistance vs reactance


Resistance and reactance together determine the magnitude and phase of the impedance through the following relations:

|Z| = sqrt(R² + X²),  θ = arctan(X/R)

In many applications the relative phase of the voltage and current is not critical so only the magnitude of the impedance is significant.

Resistance


Main article: Electrical resistance Resistance is the real part of impedance; a device with a purely resistive impedance exhibits no phase shift between the voltage and current.

Reactance


Main article: Electrical reactance Reactance is the imaginary part of the impedance; a component with a finite reactance induces a phase shift between the voltage across it and the current through it.

A purely reactive component is distinguished by the sinusoidal voltage across the component being in quadrature with the sinusoidal current through the component. This implies that the component alternately absorbs energy from the circuit and then returns energy to the circuit. A pure reactance will not dissipate any power.

Capacitive reactance


Main article: Capacitance A capacitor has a purely reactive impedance which is inversely proportional to the signal frequency. A capacitor consists of two conductors separated by an insulator, also known as a dielectric.

At low frequencies a capacitor is open circuit, as no charge flows in the dielectric. A DC voltage applied across a capacitor causes charge to accumulate on one side; the electric field due to the accumulated charge is the source of the opposition to the current. When the potential associated with the charge exactly balances the applied voltage, the current goes to zero. Driven by an AC supply, a capacitor will only accumulate a limited amount of charge before the potential difference changes sign and the charge dissipates. The higher the frequency, the less charge will accumulate and the smaller the opposition to the current.

Inductive reactance


Main article: Inductance
Inductive reactance X_L is proportional to the signal frequency f and the inductance L:

X_L = ωL = 2πfL

An inductor consists of a coiled conductor. Faraday's law of electromagnetic induction gives the back emf (voltage opposing current) due to a rate-of-change of magnetic flux density through a current loop.

For an inductor consisting of a coil with N loops this gives

v = N · dΦ_B/dt

The back-emf is the source of the opposition to current flow. A constant direct current has a zero rate-of-change, and sees an inductor as a short-circuit (it is typically made from a material with a low resistivity). An alternating current has a time-averaged rate-of-change that is proportional to frequency; this causes the increase in inductive reactance with frequency.

Total reactance


The total reactance is given by

X = X_L − X_C = ωL − 1/(ωC)

so that the total impedance is

Z = R + jX

Series combination


For components connected in series, the current through each circuit element is the same; the total impedance is the sum of the component impedances:

Z_eq = Z_1 + Z_2 + … + Z_n

Or explicitly in real and imaginary terms:

Z_eq = R + jX = (R_1 + R_2 + … + R_n) + j(X_1 + X_2 + … + X_n)

Parallel combination
For components connected in parallel, the voltage across each circuit element is the same; the ratio of currents through any two elements is the inverse ratio of their impedances.

Hence the inverse total impedance is the sum of the inverses of the component impedances:

1/Z_eq = 1/Z_1 + 1/Z_2 + … + 1/Z_n

or, when n = 2:

Z_eq = Z_1·Z_2 / (Z_1 + Z_2)

The equivalent impedance Z_eq can be written in terms of the equivalent series resistance R_eq and reactance X_eq: Z_eq = R_eq + jX_eq.[9]
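A compact sketch of the series and parallel combination rules; the R, L, C values and the 1 kHz frequency are illustrative assumptions:

```python
# Series and parallel combination of impedances.
import cmath

def series(*zs):
    return sum(zs)

def parallel(*zs):
    return 1 / sum(1 / z for z in zs)

w = 2 * cmath.pi * 1000                  # 1 kHz
Z_R, Z_L, Z_C = 100, 1j * w * 10e-3, 1 / (1j * w * 1e-6)

print("series  :", series(Z_R, Z_L, Z_C))
print("parallel:", parallel(Z_R, Z_L, Z_C))
```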

Admittance

In electrical engineering, admittance is a measure of how easily a circuit or device will allow a current to flow. It is defined as the inverse of impedance. The SI unit of admittance is the siemens (symbol S). Oliver Heaviside coined the term admittance in December 1887.[1] Admittance is defined as

Y = 1/Z

where Y is the admittance, measured in siemens, and Z is the impedance, measured in ohms. The synonymous unit mho, and the symbol ℧ (an upside-down uppercase omega), are also in common use. Resistance is a measure of the opposition of a circuit to the flow of a steady current, while impedance takes into account not only the resistance but also dynamic effects (known as reactance). Likewise, admittance is not only a measure of the ease with which a steady current can flow, but also of the dynamic effects of the material's susceptance to polarization:

Y = G + jB

where

Y is the admittance, measured in siemens, G is the conductance, measured in siemens, and B is the susceptance, measured in siemens.

Conversion from impedance to admittance
The impedance, Z, is composed of real and imaginary parts:

Z = R + jX

where

R is the resistance, measured in ohms, and X is the reactance, measured in ohms.

Admittance, just like impedance, is a complex number, made up of a real part (the conductance, G) and an imaginary part (the susceptance, B), thus:

Y = G + jB

where G (conductance) and B (susceptance) are given by:

G = R / (R² + X²),  B = −X / (R² + X²)

The magnitude and phase of the admittance are given by:

|Y| = sqrt(G² + B²) = 1 / sqrt(R² + X²),  ∠Y = arctan(B/G) = −arctan(X/R)

where

G is the conductance, measured in siemens, and B is the susceptance, also measured in siemens.

Note that (as shown above) the signs of reactances become reversed in the admittance domain; i.e., capacitive susceptance is positive and inductive susceptance is negative.
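A minimal numeric check of the conversion formulas G = R/(R² + X²) and B = −X/(R² + X²), for an assumed impedance of 3 + j4 Ω:

```python
# Impedance-to-admittance conversion, Y = 1/Z = G + jB.
Z = 3 + 4j
R, X = Z.real, Z.imag

G = R / (R**2 + X**2)      # conductance
B = -X / (R**2 + X**2)     # susceptance (sign flips relative to reactance)

Y = complex(G, B)
print(Y, 1 / Z)            # the two agree: (0.12-0.16j)
```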

Inductance

The henry (symbol: H) is the SI derived unit of inductance.[1] It is named after Joseph Henry (1797–1878), the American scientist who discovered electromagnetic induction independently of and at about the same time as Michael Faraday (1791–1867) in England.[2] The magnetic permeability of a vacuum is 4π × 10⁻⁷ H/m (henries per metre). The National Institute of Standards and Technology provides guidance for American users of SI to write the plural as henries.[3]:31

Definition


If the rate of change of current in a circuit is one ampere per second and the resulting electromotive force is one volt, then the inductance of the circuit is one henry. Other equivalent combinations of SI units are as follows:[4]

H = Wb/A = V·s/A = Ω·s = J/A² = kg·m²/(s²·A²) = kg·m²/C² = s²/F

where A = ampere, C = coulomb, F = farad, J = joule, kg = kilogram, m = metre, s = second, Wb = weber, V = volt, and Ω = ohm.

In electromagnetism and electronics, inductance is the property of a conductor by which a change in current in the conductor "induces" (creates) a voltage (electromotive force) in both the conductor itself (self-inductance)[1][2][3] and in any nearby conductors (mutual inductance).[1][3] This effect derives from two fundamental observations of physics: first, that a steady current creates a steady magnetic field (Oersted's law),[4] and second, that a time-varying magnetic field induces a voltage in a nearby conductor (Faraday's law of induction).[5] From Lenz's law,[6] in an electric circuit, a changing electric current through a circuit that has inductance induces a proportional voltage which opposes the change in current (self-inductance). The varying field in this circuit may also induce an e.m.f. in a neighbouring circuit (mutual inductance). The term 'inductance' was coined by Oliver Heaviside in February 1886.[7] It is customary to use the symbol L for inductance, in honour of the physicist Heinrich Lenz.[8][9] In the SI system the unit of inductance is the henry, named in honor of the scientist who discovered inductance, Joseph Henry. To add inductance to a circuit, electrical or electronic components called inductors are used, typically consisting of coils of wire, which concentrate the magnetic field and link it into the circuit more than once. The relationship between the self-inductance L of an electrical circuit (in henries), the voltage, and the current is

v = L · di/dt

where v denotes the voltage in volts and i the current in amperes. The voltage across an inductor is equal to the product of its inductance and the time rate of change of the current through it. All practical circuits have some inductance, which may provide either beneficial or detrimental effects. In a tuned circuit inductance is used to provide a frequency selective circuit. Practical inductors may be used to provide filtering or energy storage in a system. The inductance of a transmission line is one of the properties that determines its characteristic impedance; balancing the inductance and capacitance of cables is important for distortion-free telegraphy and telephony. The inductance of long power transmission lines limits the AC power that can be sent over them. Sensitive circuits such as microphone and computer network cables may use special cable constructions to limit the mutual inductance between signal circuits.
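A one-line numeric illustration of v = L·di/dt, with assumed values:

```python
# Voltage induced across an ideal inductor, v = L * di/dt
# (a 100 mH inductor with current ramping at 2 A per millisecond).
L = 0.1                     # henries
di_dt = 2.0 / 1e-3          # amperes per second
v = L * di_dt
print(f"induced voltage = {v:.0f} V")   # 200 V
```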

In circuit analysis


The generalization to the case of K electrical circuits with currents i_m and voltages v_m reads

v_m = Σ_n L_{m,n} · di_n/dt

Inductance here is a symmetric matrix. The diagonal coefficients Lm,m are called coefficients of self-inductance, the off-diagonal elements are called coefficients of mutual inductance. The coefficients of inductance are constant as long as no magnetizable material with nonlinear characteristics is involved. This is a direct consequence of the linearity of Maxwell's equations in the fields and the current density. The coefficients of inductance become functions of the currents in the nonlinear case, see nonlinear inductance.

Derivation from Faraday's law of inductance


The inductance equations above are a consequence of Maxwell's equations. There is a straightforward derivation in the important case of electrical circuits consisting of thin wires. Consider a system of K wire loops, each with one or several wire turns. The flux linkage of loop m is given by

N_m·Φ_m = Σ_n L_{m,n} · i_n

Here N_m denotes the number of turns in loop m, Φ_m the magnetic flux through this loop, and L_{m,n} are some constants. This equation follows from Ampère's law: magnetic fields and fluxes are linear functions of the currents. By Faraday's law of induction we have

v_m = N_m · dΦ_m/dt = Σ_n L_{m,n} · di_n/dt

where v_m denotes the voltage induced in circuit m. This agrees with the definition of inductance above if the coefficients L_{m,n} are identified with the coefficients of inductance. Because the total currents N_n·i_n contribute to Φ_m, it also follows that L_{m,n} is proportional to the product of turns N_m·N_n.

Inductance and magnetic field energy


Multiplying the equation for v_m above by i_m·dt and summing over m gives the energy transferred to the system in the time interval dt:

dW = Σ_m i_m·v_m·dt = Σ_{m,n} i_m·L_{m,n}·di_n

This must agree with the change of the magnetic field energy W caused by the currents.[10] The integrability condition

∂²W/(∂i_m ∂i_n) = ∂²W/(∂i_n ∂i_m)

requires L_{m,n} = L_{n,m}. The inductance matrix L_{m,n} thus is symmetric. The integral of the energy transfer is the magnetic field energy as a function of the currents:

W = (1/2) Σ_{m,n} i_m·L_{m,n}·i_n

This equation also is a direct consequence of the linearity of Maxwell's equations. It is helpful to associate changing electric currents with a build-up or decrease of magnetic field energy. The corresponding energy transfer requires or generates a voltage. A mechanical analogy in the K = 1 case, with magnetic field energy (1/2)·L·i², is a body with mass M, velocity u and kinetic energy (1/2)·M·u². The rate of change of velocity (current) multiplied by mass (inductance) requires or generates a force (an electrical voltage).

Coupled inductors
Further information: Coupling (electronics)

The circuit diagram representation of mutually coupled inductors. The two vertical lines between the inductors indicate a solid core that the wires of the inductor are wrapped around. "n:m" shows the ratio between the number of windings of the left inductor to windings of the right inductor. This picture also shows the dot convention.

Mutual inductance occurs when the change in current in one inductor induces a voltage in another nearby inductor. It is important as the mechanism by which transformers work, but it can also cause unwanted coupling between conductors in a circuit. The mutual inductance, M, is also a measure of the coupling between two inductors. The mutual inductance by circuit i on circuit j is given by the double-integral Neumann formula; see the calculation techniques section. The mutual inductance also has the relationship:

M_21 = N_1 · N_2 · P_21

where M_21 is the mutual inductance, and the subscript specifies the relationship of the voltage induced in coil 2 due to the current in coil 1, N_1 is the number of turns in coil 1, N_2 is the number of turns in coil 2, and P_21 is the permeance of the space occupied by the flux. The mutual inductance also has a relationship with the coupling coefficient. The coupling coefficient is always between 1 and 0, and is a convenient way to specify the relationship between a certain orientation of inductors with arbitrary inductance:

M = k · sqrt(L_1 · L_2)

where k is the coupling coefficient and 0 ≤ k ≤ 1, L_1 is the inductance of the first coil, and L_2 is the inductance of the second coil. Once the mutual inductance, M, is determined from this factor, it can be used to predict the behavior of a circuit:

V_1 = L_1 · dI_1/dt − M · dI_2/dt

where V_1 is the voltage across the inductor of interest, L_1 is the inductance of the inductor of interest, dI_1/dt is the derivative, with respect to time, of the current through the inductor of interest, dI_2/dt is the derivative, with respect to time, of the current through the inductor that is coupled to the first inductor, and M is the mutual inductance. The minus sign arises because of the sense in which the current I_2 has been defined in the diagram. With both currents defined going into the dots, the sign of M will be positive.[11] When one inductor is closely coupled to another inductor through mutual inductance, such as in a transformer, the voltages, currents, and number of turns can be related in the following way:

V_s / V_p = N_s / N_p

where V_s is the voltage across the secondary inductor, V_p is the voltage across the primary inductor (the one connected to a power source), N_s is the number of turns in the secondary inductor, and N_p is the number of turns in the primary inductor. Conversely the current:

I_s / I_p = N_p / N_s

where I_s is the current through the secondary inductor, I_p is the current through the primary inductor (the one connected to a power source), N_s is the number of turns in the secondary inductor, and N_p is the number of turns in the primary inductor. Note that the power through one inductor is the same as the power through the other. Also note that these equations don't work if both windings are forced (driven by power sources). When either side of the transformer is a tuned circuit, the amount of mutual inductance between the two windings determines the shape of the frequency response curve. Although no boundaries are defined, this is often referred to as loose, critical, and over-coupling. When two tuned circuits are loosely coupled through mutual inductance, the bandwidth will be narrow. As the amount of mutual inductance increases, the bandwidth continues to grow. When the mutual inductance is increased beyond a critical point, the peak in the response curve begins to drop, and the center frequency will be attenuated more strongly than its direct sidebands. This is known as overcoupling.
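A short sketch of the ideal-transformer ratios quoted above; the turns counts and drive values are illustrative assumptions:

```python
# Ideal transformer relations: Vs/Vp = Ns/Np and Is/Ip = Np/Ns.
Np, Ns = 1000, 100
Vp, Ip = 230.0, 0.5

Vs = Vp * Ns / Np
Is = Ip * Np / Ns

print(f"secondary: {Vs:.1f} V, {Is:.1f} A")
print(f"power in = {Vp*Ip:.1f} W, power out = {Vs*Is:.1f} W")  # equal for an ideal transformer
```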

Calculation techniques
In the most general case, inductance can be calculated from Maxwell's equations. Many important cases can be solved using simplifications. Where high-frequency currents are considered, with skin effect, the surface current densities and magnetic field may be obtained by solving the Laplace equation. Where the conductors are thin wires, self-inductance still depends on the wire radius and the distribution of the current in the wire. This current distribution is approximately constant (on the surface or in the volume of the wire) for a wire radius much smaller than other length scales.

Mutual inductance of two wire loops


The mutual inductance by a filamentary circuit i on a filamentary circuit j is given by the double integral Neumann formula[12]

M_ij = (μ_0 / 4π) ∮_{C_i} ∮_{C_j} (ds_i · ds_j) / R_ij

The symbol μ_0 denotes the magnetic constant (4π × 10⁻⁷ H/m), C_i and C_j are the curves spanned by the wires, and R_ij is the distance between two points. See a derivation of this equation.

Self-inductance of a wire loop


Formally the self-inductance of a wire loop would be given by the above equation with i = j. The problem, however, is that 1/R now becomes infinite, making it necessary to take the finite wire radius a and the distribution of the current in the wire into account. There remain the contribution from the integral over all points with |R| > a/2 and a correction term,[13]

Here a and l denote the radius and length of the wire, and Y is a constant that depends on the distribution of the current in the wire: Y = 0 when the current flows on the surface of the wire (skin effect), Y = 1/4 when the current is homogeneous across the wire. This approximation is accurate when the wires are long compared to their cross-sectional dimensions.

Method of images


In some cases different current distributions generate the same magnetic field in some section of space. This fact may be used to relate self inductances (method of images). As an example consider the two systems:

A wire at distance d/2 in front of a perfectly conducting wall (which is the return)
Two parallel wires at distance d, with opposite currents

The magnetic field of the two systems coincides (in a half space). The magnetic field energy and the inductance of the second system thus are twice as large as that of the first system.

Relation between inductance and capacitance
Inductance per length L′ and capacitance per length C′ are related to each other in the special case of transmission lines consisting of two parallel perfect conductors of arbitrary but constant cross section:[14]

L′ · C′ = ε · μ

Here ε and μ denote the dielectric constant and magnetic permeability of the medium the conductors are embedded in. There is no electric and no magnetic field inside the conductors (complete skin effect, high frequency). Current flows down on one line and returns on the other. Signals will propagate along the transmission line at the speed of electromagnetic radiation in the non-conductive medium enveloping the conductors.

Inductance with physical symmetry


Inductance of a solenoid
A solenoid is a long, thin coil, i.e. a coil whose length is much greater than its diameter. Under these conditions, and without any magnetic material used, the magnetic flux density B within the coil is practically constant and is given by

B = μ_0 · N · i / l

where μ_0 is the magnetic constant, N the number of turns, i the current and l the length of the coil. Ignoring end effects, the total magnetic flux through the coil is obtained by multiplying the flux density B by the cross-section area A and the number of turns N:

Φ = N · B · A = μ_0 · N² · i · A / l

When this is combined with the definition of inductance

L = Φ / i,

it follows that the inductance of a solenoid is given by:

L = μ_0 · N² · A / l

A table of inductance for short solenoids of various diameter-to-length ratios has been calculated by Dellinger, Whittmore, and Ould.[18] This, and the inductance of more complicated shapes, can be derived from Maxwell's equations. For rigid air-core coils, inductance is a function of coil geometry and number of turns, and is independent of current. Similar analysis applies to a solenoid with a magnetic core, but only if the length of the coil is much greater than the product of the relative permeability of the magnetic core and the diameter. That limits the simple analysis to low-permeability cores, or extremely long thin solenoids. Although rarely useful, the equations are

B = μ_0 · μ_r · N · i / l

where μ_r is the relative permeability of the material within the solenoid,

from which it follows that the inductance of a solenoid is given by:

L = μ_0 · μ_r · N² · A / l

where N is squared because of the definition of inductance.

Note that since the permeability of ferromagnetic materials changes with applied magnetic flux, the inductance of a coil with a ferromagnetic core will generally vary with current.
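A minimal sketch of the air-core solenoid formula L = μ₀N²A/l, with assumed dimensions:

```python
# Inductance of a long air-core solenoid, L = mu0 * N^2 * A / l
# (illustrative: 500 turns, 1 cm radius, 20 cm long).
import math

mu0 = 4 * math.pi * 1e-7        # magnetic constant, H/m
N, radius, length = 500, 0.01, 0.20

A = math.pi * radius**2
L = mu0 * N**2 * A / length
print(f"L = {L*1e3:.3f} mH")     # about 0.49 mH
```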

Inductance of a coaxial line


Let the inner conductor have radius r_i and permeability μ_i, let the dielectric between the inner and outer conductor have permeability μ_d, and let the outer conductor have inner radius r_o1, outer radius r_o2, and permeability μ_o. Assume that a DC current I flows in opposite directions in the two conductors, with uniform current density. The magnetic field generated by these currents points in the azimuthal direction and is a function of the radius r; it can be computed using Ampère's law:

The flux per length in the region between the conductors can be computed by drawing a surface containing the axis:

Inside the conductors, L can be computed by equating the energy stored in an inductor, (1/2)·L·I², with the energy stored in the magnetic field:

(1/2)·L·I² = ∫ (B² / (2μ)) dV

For a cylindrical geometry with no dependence on the axial coordinate, the energy per unit length is

(1/2)·L′·I² = ∫ (B² / (2μ)) dA

where L′ is the inductance per unit length and the integral runs over the cross-section. Evaluating the integral on the right-hand side separately for the inner conductor, the dielectric region, and the outer conductor, then solving for L′ and summing the terms for each region together, gives the total inductance per unit length.

However, for a typical coaxial line application we are interested in passing (non-DC) signals at frequencies for which the resistive skin effect cannot be neglected. In most cases, the inner and outer conductor terms are negligible, in which case one may approximate

L′ ≈ (μ_d / 2π) · ln(r_o1 / r_i)
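A small sketch of the high-frequency approximation above, using illustrative dimensions roughly comparable to a thin 50 Ω cable (the specific radii are assumptions):

```python
# External inductance per unit length of a coaxial line, L' = (mu/(2*pi)) * ln(b/a).
import math

mu = 4 * math.pi * 1e-7           # non-magnetic dielectric, so mu ~ mu0
a, b = 0.45e-3, 1.47e-3           # inner conductor radius and shield inner radius, metres

L_per_m = mu / (2 * math.pi) * math.log(b / a)
print(f"L' = {L_per_m*1e9:.0f} nH/m")    # roughly 240 nH/m
```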

Phasor circuit analysis and impedance


Using phasors, the equivalent impedance of an inductance is given by:

Z_L = jωL

where j is the imaginary unit, L is the inductance, ω = 2πf is the angular frequency, f is the frequency, and ωL = X_L is the inductive reactance.

Nonlinear inductance


Many inductors make use of magnetic materials. These materials, over a large enough range, exhibit a nonlinear permeability, with effects such as saturation. This in turn makes the resulting inductance a function of the applied current. Faraday's law still holds, but the inductance becomes ambiguous: it differs depending on whether one is calculating circuit parameters or magnetic fluxes. The secant or large-signal inductance is used in flux calculations. It is defined as:

L_s(i) = Φ(i) / i

The differential or small-signal inductance, on the other hand, is used in calculating voltage. It is defined as:

L_d(i) = dΦ(i) / di

The circuit voltage for a nonlinear inductor is obtained via the differential inductance, as shown by Faraday's law and the chain rule of calculus:

v(t) = dΦ/dt = (dΦ/di) · (di/dt) = L_d(i) · di/dt

There are similar definitions for nonlinear mutual inductances.

Capacitance
Capacitance is the ability of a body to store an electrical charge. Any object that can be electrically charged exhibits capacitance. A common form of energy storage device is a parallel-plate capacitor. In a parallel-plate capacitor, capacitance is directly proportional to the surface area of the conductor plates and inversely proportional to the separation distance between the plates. If the charges on the plates are +q and −q, and V gives the voltage between the plates, then the capacitance C is given by

C = q / V

which gives the voltage/current relationship

i(t) = C · dv(t)/dt

The capacitance is a function only of the physical dimensions (geometry) of the conductors and the permittivity of the dielectric. It is independent of the potential difference between the conductors and the total charge on them. The SI unit of capacitance is the farad (symbol: F), named after the English physicist Michael Faraday; a 1 farad capacitor, when charged with 1 coulomb of electrical charge, will have a potential difference of 1 volt between its plates.[1] Historically, a farad was regarded as an inconveniently large unit, both electrically and physically. Its subdivisions were invariably used, namely the microfarad, nanofarad and picofarad. More recently, technology has advanced such that capacitors of 1 farad and greater can be constructed in a structure little larger than a coin battery (so-called 'supercapacitors'). Such capacitors are principally used for energy storage, replacing more traditional batteries. The energy (measured in joules) stored in a capacitor is equal to the work done to charge it. Consider a capacitor of capacitance C, holding a charge +q on one plate and −q on the other. Moving a small element of charge dq from one plate to the other against the potential difference V = q/C requires the work dW:

dW = (q / C) · dq

where W is the work measured in joules, q is the charge measured in coulombs and C is the capacitance, measured in farads.

The energy stored in a capacitor is found by integrating this equation. Starting with an uncharged capacitance (q = 0) and moving charge from one plate to the other until the plates have charge +Q and −Q requires the work W:

W = ∫₀^Q (q / C) dq = Q² / (2C) = (1/2) · C · V²
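A tiny numeric illustration of W = ½CV², with assumed values:

```python
# Energy stored in a capacitor, W = Q^2 / (2C) = (1/2) C V^2
# (illustrative: a 470 uF capacitor charged to 12 V).
C = 470e-6
V = 12.0

Q = C * V
W = 0.5 * C * V**2
print(f"charge = {Q*1e3:.2f} mC, energy = {W*1e3:.1f} mJ")
```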

Capacitors


Main article: Capacitor The farad (symbol: F) is the SI derived unit of electrical capacitance. It is named after the English physicist Michael Faraday.

Definition


One farad is the value of capacitance that produces a potential of one volt when it has been charged by one coulomb. A coulomb is equal to the amount of charge (electrons) produced by a current of one ampere flowing for one second. For example, the voltage across the two terminals of a 47 nF capacitor will increase linearly by 1 V when a current of 47 nA flows through it for 1 s. For most applications, the farad is an impractically large unit of capacitance, although capacitors measured in farads are now used, especially for backing up memory. The most commonly used SI prefixes for electrical and electronic applications are:

1 millifarad (mF) = one thousandth (10⁻³) of a farad, or 1000 μF
1 microfarad (μF, or MFD in industrial use) = one millionth (10⁻⁶) of a farad, or 1000000 pF, or 1000 nF
1 nanofarad (nF) = one billionth (10⁻⁹) of a farad, or 1000 pF
1 picofarad (pF) = one trillionth (10⁻¹²) of a farad

Equalities


A farad has the base SI representation of s⁴·A²·m⁻²·kg⁻¹. It can further be expressed as:

F = C/V = A·s/V = J/V² = W·s/V² = C²/J = C²/(N·m) = s/Ω = s⁴·A²/(m²·kg)

where A = ampere, V = volt, C = coulomb, J = joule, m = metre, N = newton, s = second, W = watt, kg = kilogram, and Ω = ohm.

The capacitance of the majority of capacitors used in electronic circuits is generally several orders of magnitude smaller than the farad. The most common subunits of capacitance in use today are the microfarad (μF), nanofarad (nF), picofarad (pF), and, in microcircuits, femtofarad (fF). However, specially made supercapacitors can be much larger (as much as hundreds of farads), and parasitic capacitive elements can be less than a femtofarad. Capacitance can be calculated if the geometry of the conductors and the dielectric properties of the insulator between the conductors are known. For example, the capacitance of a parallel-plate capacitor constructed of two parallel plates, both of area A, separated by a distance d, is approximately equal to the following:

C ≈ ε_r · ε_0 · A / d

where C is the capacitance; A is the area of overlap of the two plates; ε_r is the relative static permittivity (sometimes called the dielectric constant) of the material between the plates (for a vacuum, ε_r = 1); ε_0 is the electric constant (ε_0 ≈ 8.854 × 10⁻¹² F·m⁻¹); and d is the separation between the plates. Capacitance is proportional to the area of overlap and inversely proportional to the separation between conducting sheets. The closer the sheets are to each other, the greater the capacitance. The equation is a good approximation if d is small compared to the other dimensions of the plates, so the field in the capacitor over most of its area is uniform, and the so-called fringing field around the periphery provides only a small contribution. In CGS units the equation has the form:[2]

C ≈ ε_r · A / (4πd)

where C in this case has the units of length. Combining the SI equation for capacitance with the above equation for the energy stored in a capacitance, for a flat-plate capacitor the energy stored is:

W = (1/2) · C · V² = (1/2) · ε_r · ε_0 · (A / d) · V²

where W is the energy, in joules; C is the capacitance, in farads; and V is the voltage, in volts.
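A short sketch combining C = ε_rε₀A/d with the stored-energy formula, using assumed plate dimensions:

```python
# Capacitance and stored energy of a parallel-plate capacitor
# (illustrative: 10 cm x 10 cm plates, 0.1 mm apart, air dielectric, 100 V).
eps0 = 8.854e-12      # electric constant, F/m
eps_r = 1.0           # relative permittivity (air ~ 1)
A = 0.1 * 0.1         # plate area, m^2
d = 1e-4              # separation, m
V = 100.0

C = eps_r * eps0 * A / d
W = 0.5 * C * V**2
print(f"C = {C*1e9:.2f} nF, W = {W*1e6:.1f} uJ")
```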

Voltage-dependent capacitors


The dielectric constant for a number of very useful dielectrics changes as a function of the applied electrical field, for example in ferroelectric materials, so the capacitance for these devices is more complex. For example, in charging such a capacitor the differential increase in voltage with charge is governed by:

dQ = C(V) · dV

where the voltage dependence of capacitance, C(V), stems from the field, which in a large-area parallel-plate device is given by E = V/d. This field polarizes the dielectric; in the case of a ferroelectric, the polarization is a nonlinear S-shaped function of the field, which, in the case of a large-area parallel-plate device, translates into a capacitance that is a nonlinear function of the voltage causing the field.[3][4] Corresponding to the voltage-dependent capacitance, charging the capacitor to a voltage V yields the integral relation:

Q = ∫₀^V C(V′) dV′

which agrees with Q = CV only when C is voltage independent. By the same token, the energy stored in the capacitor now is given by

[citation needed]

Integrating:

[citation needed]

where interchange of the order of integration is used. The nonlinear capacitance of a microscope probe scanned along a ferroelectric surface is used to study the domain structure of ferroelectric materials.[5] Another example of voltage dependent capacitance occurs in semiconductor devices such as semiconductor diodes, where the voltage dependence stems not from a change in dielectric constant but in a voltage dependence of the spacing between the charges on the two sides of the capacitor.[6] This effect is intentionally exploited in diode-like devices known as varicaps.

Frequency-dependent capacitors
If a capacitor is driven with a time-varying voltage that changes rapidly enough, then the polarization of the dielectric cannot follow the signal. As an example of the origin of this mechanism, the internal microscopic dipoles contributing to the dielectric constant cannot move instantly, and so as the frequency of an applied alternating voltage increases, the dipole response is limited and the dielectric constant diminishes. A changing dielectric constant with frequency is referred to as dielectric dispersion, and is governed by dielectric relaxation processes, such as Debye relaxation. Under transient conditions, the displacement field can be expressed as (see electric susceptibility):

D(t) = ε₀ ∫_{−∞}^{t} ε_r(t − t′) · E(t′) dt′

indicating the lag in response by the time dependence of ε_r, calculated in principle from an underlying microscopic analysis, for example, of the dipole behavior in the dielectric. See, for example, linear response function.[7][8] The integral extends over the entire past history up to the present time. A Fourier transform in time then results in:

D(ω) = ε₀ · ε_r(ω) · E(ω)

where ε_r(ω) is now a complex function, with an imaginary part related to absorption of energy from the field by the medium. See permittivity. The capacitance, being proportional to the dielectric constant, also exhibits this frequency behavior. Fourier transforming Gauss's law with this form for the displacement field:

I(ω) = jω · Q(ω) = [G(ω) + jω · C(ω)] · V(ω) = V(ω) / Z(ω)

where j is the imaginary unit, V(ω) is the voltage component at angular frequency ω, G(ω) is the real part of the current, called the conductance, and C(ω) determines the imaginary part of the current and is the capacitance. Z(ω) is the complex impedance. When a parallel-plate capacitor is filled with a dielectric, the measurement of the dielectric properties of the medium is based upon the relation:

ε_r(ω) = ε_r′(ω) − j·ε_r″(ω) = 1 / (jω · Z(ω) · C₀) = C(ω) / C₀

where a single prime denotes the real part and a double prime the imaginary part, Z(ω) is the complex impedance with the dielectric present, C(ω) is the so-called complex capacitance with the dielectric present, and C₀ is the capacitance without the dielectric.[9][10] (Measurement "without the dielectric" in principle means measurement in free space, an unattainable goal inasmuch as even the quantum vacuum is predicted to exhibit nonideal behavior, such as dichroism. For practical purposes, when measurement errors are taken into account, often a measurement in terrestrial vacuum, or simply a calculation of C₀, is sufficiently accurate.[11]) Using this measurement method, the dielectric constant may exhibit a resonance at certain frequencies corresponding to characteristic response frequencies (excitation energies) of contributors to the dielectric constant.

These resonances are the basis for a number of experimental techniques for detecting defects. The conductance method measures absorption as a function of frequency.[12] Alternatively, the time response of the capacitance can be used directly, as in deep-level transient spectroscopy.[13] Another example of frequency dependent capacitance occurs with MOS capacitors, where the slow generation of minority carriers means that at high frequencies the capacitance measures only the majority carrier response, while at low frequencies both types of carrier respond.[14][15] At optical frequencies, in semiconductors the dielectric constant exhibits structure related to the band structure of the solid. Sophisticated modulation spectroscopy measurement methods based upon modulating the crystal structure by pressure or by other stresses and observing the related changes in absorption or reflection of light have advanced our knowledge of these materials.[16]

Capacitance matrix
The discussion above is limited to the case of two conducting plates, although of arbitrary size and shape. The definition C = Q/V still holds for a single plate given a charge, in which case the field lines produced by that charge terminate as if the plate were at the center of an oppositely charged sphere at infinity. The definition C = Q/V does not apply when there are more than two charged plates, or when the net charge on the two plates is non-zero. To handle this case, Maxwell introduced his coefficients of potential. If three plates are given charges Q_1, Q_2, Q_3, then the voltage of plate 1 is given by

V_1 = P_11·Q_1 + P_12·Q_2 + P_13·Q_3

and similarly for the other voltages. Hermann von Helmholtz and Sir William Thomson showed that the coefficients of potential are symmetric, so that P_12 = P_21, etc. Thus the system can be described by a collection of coefficients known as the elastance matrix or reciprocal capacitance matrix, which is defined as:

V_i = Σ_j P_ij · Q_j

From this, the mutual capacitance C_m between two objects can be defined[17] by solving for the total charge Q and using C_m = Q/V.

Since no actual device holds perfectly equal and opposite charges on each of the two "plates", it is the mutual capacitance that is reported on capacitors.

The collection of coefficients is known as the capacitance matrix,[18][19] and is the inverse of the elastance matrix.

Self-capacitance


In electrical circuits, the term capacitance is usually a shorthand for the mutual capacitance between two adjacent conductors, such as the two plates of a capacitor. However, for an isolated conductor there also exists a property called self-capacitance, which is the amount of electrical charge that must be added to an isolated conductor to raise its electrical potential by one unit (i.e. one volt, in most measurement systems).[20] The reference point for this potential is a theoretical hollow conducting sphere, of infinite radius, centered on the conductor. Using this method, the self-capacitance of a conducting sphere of radius R is given by:[21]

C = 4πε0R

Example values of self-capacitance are:


for the top "plate" of a van de Graaff generator, typically a sphere 20 cm in radius: 22.24 pF
for the planet Earth: about 710 µF[22]
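The sphere formula above is easy to check numerically; the short Python sketch below reproduces both example values (the Earth figure uses a mean radius of about 6371 km).

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def sphere_self_capacitance(radius_m):
    """Self-capacitance of an isolated conducting sphere, C = 4*pi*eps0*R."""
    return 4 * math.pi * EPS0 * radius_m

print(sphere_self_capacitance(0.20))      # ~2.22e-11 F, i.e. about 22.2 pF
print(sphere_self_capacitance(6.371e6))   # ~7.1e-4 F, i.e. about 710 microfarads
```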

The capacitative component of a coil, which reduces its impedance at high frequencies and can lead to resonance and self-oscillation, is also called self-capacitance,[23] as well as stray or parasitic capacitance.

Elastance


The reciprocal of capacitance is called elastance. The unit of elastance is the daraf, but it is not recognised by the SI.

Stray capacitance


Main article: Parasitic capacitance

Any two adjacent conductors can be considered a capacitor, although the capacitance will be small unless the conductors are close together for long distances or over a large area. This (often unwanted) effect is termed "stray capacitance". Stray capacitance can allow signals to leak between otherwise isolated circuits (an effect called crosstalk), and it can be a limiting factor for proper functioning of circuits at high frequency.

Stray capacitance is often encountered in amplifier circuits in the form of "feedthrough" capacitance that interconnects the input and output nodes (both defined relative to a common ground). It is often convenient for analytical purposes to replace this capacitance with a combination of one input-to-ground capacitance and one output-to-ground capacitance; the original configuration, including the input-to-output capacitance, is often referred to as a pi-configuration. Miller's theorem can be used to effect this replacement: it states that, if the gain ratio of two nodes is 1/K, then an impedance of Z connecting the two nodes can be replaced with a Z/(1 − K) impedance between the first node and ground and a KZ/(K − 1) impedance between the second node and ground. Since impedance varies inversely with capacitance, the internode capacitance, C, is seen to be replaced by a capacitance of KC from input to ground and a capacitance of (K − 1)C/K from output to ground. When the input-to-output gain is very large, the equivalent input-to-ground impedance is very small, while the output-to-ground impedance is essentially equal to the original (input-to-output) impedance.
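As a sketch only (the helper name and the numbers are made up), the following Python function performs the Miller replacement described above, with K taken as the output-to-input voltage gain. It uses the exact equivalents C_in = (1 − K)C and C_out = (1 − 1/K)C = (K − 1)C/K; for a large inverting gain the input term approaches |K|·C, which is the approximation quoted in the text.

```python
def miller_split(C, K):
    """Split a feedthrough capacitance C between amplifier input and output
    into two grounded capacitances, given the voltage gain K = Vout/Vin.

    Exact Miller equivalents:
      C_in  = (1 - K) * C       # input to ground
      C_out = (1 - 1/K) * C     # output to ground, equal to (K - 1)*C/K
    """
    return (1 - K) * C, (1 - 1 / K) * C

# Made-up example: 2 pF feedthrough capacitance, inverting gain of -100
c_in, c_out = miller_split(2e-12, -100)
print(c_in, c_out)   # ~202 pF at the input, ~2.02 pF at the output
```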

Capacitance of simple systems


Calculating the capacitance of a system amounts to solving the Laplace equation ∇²φ = 0 with a constant potential φ on the surface of the conductors. This is trivial in cases with high symmetry. There is no solution in terms of elementary functions in more complicated cases.

Parallel-plate capacitor: C = εA/d (ε: permittivity; A: plate area; d: plate separation)
Coaxial cable: C = 2πεl / ln(a2/a1) (ε: permittivity; a1: inner radius; a2: outer radius; l: length)
Pair of parallel wires:[24] C = πεl / arcosh(d/2a) (ε: permittivity; a: wire radius; d: distance, d > 2a; l: wire length)
Wire parallel to wall:[24] C = 2πεl / arcosh(d/a) (ε: permittivity; a: wire radius; d: distance, d > a; l: wire length)
Two parallel coplanar strips:[25] d: distance; w1, w2: strip widths; km = d/(2wm + d); k² = k1k2; K: elliptic integral; l: length
Concentric spheres: C = 4πε / (1/a1 − 1/a2) (ε: permittivity; a1: inner radius; a2: outer radius)
Two spheres, equal radius:[26][27] a: radius; d: distance, d > 2a; D = d/2a; γ: Euler's constant
Sphere in front of wall:[26] a: radius; d: distance, d > a; D = d/a
Sphere: C = 4πε0a (a: radius)
Circular disc:[28] C = 8ε0a (a: radius)
Thin straight wire, finite length:[29][30][31] C ≈ 2πε0l/Λ to leading order (a: wire radius; l: length; Λ = ln(l/a))

For quasi-two-dimensional situations analytic functions may be used to map different geometries to each other. See also Schwarz-Christoffel mapping.
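As a quick numerical sanity check on a few of the closed-form rows above (not part of the original table), the Python helpers below evaluate the parallel-plate, coaxial and isolated-sphere formulas; the geometries in the example calls are made up but representative.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def parallel_plate(area, gap, eps_r=1.0):
    """C = eps*A/d (fringing fields neglected)."""
    return eps_r * EPS0 * area / gap

def coaxial(inner_r, outer_r, length, eps_r=1.0):
    """C = 2*pi*eps*l / ln(a2/a1)."""
    return 2 * math.pi * eps_r * EPS0 * length / math.log(outer_r / inner_r)

def isolated_sphere(radius):
    """C = 4*pi*eps0*a."""
    return 4 * math.pi * EPS0 * radius

print(parallel_plate(1e-4, 1e-6, eps_r=3.9))    # ~3.5 nF: 1 cm^2 plates, 1 um gap
print(coaxial(0.5e-3, 1.75e-3, 1.0, eps_r=2.3)) # ~0.1 nF per metre of cable
print(isolated_sphere(0.20))                    # ~22 pF
```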

Real, reactive, and apparent power


In a simple alternating current (AC) circuit consisting of a source and a linear load, both the current and voltage are sinusoidal. If the load is purely resistive, the two quantities reverse their polarity at the same time. At every instant the product of voltage and current is positive, indicating that the direction of energy flow does not reverse. In this case, only real power is transferred. If the loads are purely reactive, then the voltage and current are 90 degrees out of phase. For half of each cycle, the product of voltage and current is positive, but on the other half of the cycle, the product is negative, indicating that on average, exactly as much energy flows toward the load as flows back. There is no net energy flow over one cycle. In this case, only reactive energy flows; there is no net transfer of energy to the load.

Practical loads have resistance, inductance, and capacitance, so both real and reactive power will flow to real loads. Power engineers measure apparent power as the magnitude of the vector sum of real and reactive power. Apparent power is the product of the root-mean-square voltage and the root-mean-square current. Engineers care about apparent power because, even though the current associated with reactive power does no work at the load, it heats the wires, wasting energy. Conductors, transformers and generators must be sized to carry the total current, not just the current that does useful work.

Another consequence is that adding the apparent power for two loads will not accurately give the total apparent power unless they have the same displacement between current and voltage (the same power factor). Conventionally, capacitors are considered to generate reactive power and inductors to consume it. If a capacitor and an inductor are placed in parallel, then the currents flowing through the inductor and the capacitor tend to cancel rather than add. This is the fundamental mechanism for controlling the power factor in electric power transmission; capacitors (or inductors) are inserted in a circuit to partially cancel reactive power 'consumed' by the load.

The complex power is the vector sum of real and reactive power. The apparent power is the magnitude of the complex power.

Power triangle diagram: real power (P), reactive power (Q), complex power (S), apparent power (|S|), phase of current (φ).

Engineers use the following terms to describe energy flow in a system (and assign each of them a different unit to differentiate between them):

Real power (P) or active power:[1] watt [W]
Reactive power (Q): volt-ampere reactive [VAr]
Complex power (S): volt-ampere [VA]
Apparent power (|S|), that is, the magnitude of complex power S: volt-ampere [VA]
Phase of voltage relative to current (φ): the angle of difference (in degrees) between voltage and current; current lagging voltage (quadrant I vector), current leading voltage (quadrant IV vector)

In the diagram, P is the real power, Q is the reactive power (in this case positive), S is the complex power and the length of S is the apparent power.
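A brief Python sketch (illustrative values only) showing how these quantities relate: the complex power S = P + jQ is built from the RMS voltage, RMS current and phase angle, and the apparent power is its magnitude.

```python
import cmath
import math

def ac_power(v_rms, i_rms, phi_deg):
    """Real, reactive, complex and apparent power for sinusoidal V and I.

    phi_deg is the phase angle of the voltage relative to the current
    (positive for an inductive, lagging-current load).
    """
    phi = math.radians(phi_deg)
    S = v_rms * i_rms * cmath.exp(1j * phi)   # complex power, S = P + jQ
    P = S.real    # real (active) power, W
    Q = S.imag    # reactive power, var
    return P, Q, S, abs(S)

# Hypothetical load: 230 V, 10 A, current lagging by 30 degrees
P, Q, S, S_mag = ac_power(230.0, 10.0, 30.0)
print(P, Q, S_mag)   # ~1992 W, ~1150 var, 2300 VA
```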

Wattless power


Reactive power does not do any work, so it is represented on the imaginary axis of the vector diagram. Real power does do work, so it is plotted on the real axis.

The unit for all forms of power is the watt (symbol: W), but this unit is generally reserved for real power. Apparent power is conventionally expressed in volt-amperes (VA) since it is the product of rms voltage and rms current. The unit for reactive power is expressed as VAR, which stands for volt-amperes reactive. Since reactive power transfers no net energy to the load, it is sometimes called "wattless" power. It does, however, serve an important function in electrical grids, and its lack has been cited as a significant factor in the Northeast Blackout of 2003.[2] Understanding the relationship among these three quantities lies at the heart of understanding power engineering. The mathematical relationship among them can be represented by vectors or expressed using complex numbers, S = P + jQ (where j is the imaginary unit).

Power factor


Main article: Power factor

The ratio between real power and apparent power in a circuit is called the power factor. It is a practical measure of the efficiency of a power distribution system. For two systems transmitting the same amount of real power, the system with the lower power factor will have higher circulating currents due to energy that returns to the source from energy storage in the load. These higher currents produce higher losses and reduce overall transmission efficiency. A lower power factor circuit will have a higher apparent power and higher losses for the same amount of real power. The power factor is unity (one) when the voltage and current are in phase. It is zero when the current leads or lags the voltage by 90 degrees.

Power factors are usually stated as "leading" or "lagging" to show the sign of the phase angle of current with respect to voltage. Purely capacitive circuits supply reactive power with the current waveform leading the voltage waveform by 90 degrees, while purely inductive circuits absorb reactive power with the current waveform lagging the voltage waveform by 90 degrees. The result of this is that capacitive and inductive circuit elements tend to cancel each other out. Where the waveforms are purely sinusoidal, the power factor is the cosine of the phase angle (φ) between the current and voltage sinusoid waveforms. Equipment data sheets and nameplates often abbreviate power factor as "cos φ" for this reason.

Example: The real power is 700 W and the phase angle between voltage and current is 45.6°. The power factor is cos(45.6°) = 0.700. The apparent power is then 700 W / cos(45.6°) = 1000 VA.
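The arithmetic of the example can be checked directly; a minimal Python snippet:

```python
import math

phi_deg = 45.6
pf = math.cos(math.radians(phi_deg))        # power factor = cos(phi)
real_power = 700.0                          # W, from the example above
apparent_power = real_power / pf            # VA

print(round(pf, 3), round(apparent_power))  # 0.7 and ~1000 VA
```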

The volt-ampere reactive (var) is a unit used to measure reactive power in an AC electric power system. Reactive power exists in an AC circuit when the current and voltage are not in phase. The correct symbol is var and not Var, VAr, or VAR,[1] but all three terms are widely used. The term var was proposed by the Romanian electrical engineer Constantin Budeanu and introduced in 1930 by the IEC in Stockholm, which has adopted it as the unit for reactive power. Vars may be considered as either the imaginary part of apparent power, or the power flowing into a reactive load, where voltage and current are specified in volts and amperes. The two definitions are equivalent.

Reactive power


Main article: AC power

A sinusoidally alternating voltage applied to a purely resistive load results in an alternating current that is fully in phase with the voltage. However, in many applications it is common for there to be a reactive component to the system, that is, the system possesses capacitance, inductance, or both. These electrical properties cause the current to change phase with respect to the voltage: capacitance tends to make the current lead the voltage in phase, and inductance tends to make it lag. For sinusoidal currents and voltages at the same frequency, reactive power in vars is the product of the RMS voltage and current, or the apparent power, multiplied by the sine of the phase angle between the voltage and the current. The reactive power Q (measured in units of volt-amperes reactive, or var) is given by:

Q = Vrms · Irms · sin φ

where φ is the phase angle between the voltage and current.

Only effective power, the actual power delivered to or consumed by the load, is expressed in watts. Imaginary power is properly expressed in volt-amperes reactive.

The volt-ampere (VA) is the unit used for the apparent power in an electrical circuit, equal to the product of root-mean-square (RMS) voltage and RMS current.[1] In direct current (DC) circuits, this product is equal to the real power (active power)[2] in watts. Volt-amperes are useful only in the context of alternating current (AC) circuits (sinusoidal voltages and currents of the same frequency). While both the volt-ampere (abbreviated VA) and the watt have the dimension of power (time rate of energy), they do not have the same meaning. Some devices, including uninterruptible power supplies (UPSs), have ratings both for maximum volt-amperes and maximum watts. The VA rating is limited by the maximum permissible current, and the watt rating by the power-handling capacity of the device. When a UPS powers equipment which presents a reactive load with a low power factor, neither limit may safely be exceeded.[3] For example, a (large) UPS system rated to deliver 400,000 volt-amperes (400 kVA) at 220 volts can deliver a current of 1818 amperes.
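A one-line check of the UPS figure quoted above (400 kVA at 220 V), sketched in Python:

```python
def max_current(va_rating, volts):
    """Maximum RMS current a source rated in volt-amperes can deliver."""
    return va_rating / volts

print(round(max_current(400_000, 220)))   # ~1818 A, matching the UPS example
```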

Power factor
For other uses, see Power factor (pistol).

The power factor of an AC electrical power system is defined as the ratio of the real power flowing to the load to the apparent power in the circuit,[1][2] and is a dimensionless number between −1 and 1. Real power is the capacity of the circuit for performing work in a particular time. Apparent power is the product of the current and voltage of the circuit. Due to energy stored in the load and returned to the source, or due to a non-linear load that distorts the wave shape of the current drawn from the source, the apparent power will be greater than the real power. A negative power factor occurs when the device which is normally the load generates power which then flows back towards the device which is normally considered the generator.[3][4][5]

In an electric power system, a load with a low power factor draws more current than a load with a high power factor for the same amount of useful power transferred. The higher currents increase the energy lost in the distribution system, and require larger wires and other equipment. Because of the costs of larger equipment and wasted energy, electrical utilities will usually charge a higher cost to industrial or commercial customers where there is a low power factor.

Linear loads with low power factor (such as induction motors) can be corrected with a passive network of capacitors or inductors. Non-linear loads, such as rectifiers, distort the current drawn from the system. In such cases, active or passive power factor correction may be used to counteract the distortion and raise the power factor. The devices for correction of the power factor may be at a central substation, spread out over a distribution system, or built into power-consuming equipment.

Linear circuits

Instantaneous and average power calculated from AC voltage and current with a zero power factor. The blue line shows all the power is stored temporarily in the load during the first quarter cycle and returned to the grid during the second quarter cycle, so no real power is consumed.

Instantaneous and average power calculated from AC voltage and current with a lagging power factor. The blue line shows some of the power is returned to the grid during part of the cycle.

In a purely resistive AC circuit, voltage and current waveforms are in step (or in phase), changing polarity at the same instant in each cycle. All the power entering the load is consumed (or dissipated). Where reactive loads are present, such as with capacitors or inductors, energy storage in the loads results in a time difference between the current and voltage waveforms. During each cycle of the AC voltage, extra energy, in addition to any energy consumed in the load, is temporarily stored in the load in electric or magnetic fields, and then returned to the power grid a fraction of a second later in the cycle. The "ebb and flow" of this nonproductive power increases the current in the line. Thus, a circuit with a low power factor will use higher currents to transfer a given quantity of real power than a circuit with a high power factor.

A linear load does not change the shape of the waveform of the current, but may change the relative timing (phase) between voltage and current. Circuits containing purely resistive heating elements (filament lamps, cooking stoves, etc.) have a power factor of 1.0. Circuits containing inductive or capacitive elements (electric motors, solenoid valves, lamp ballasts, and others) often have a power factor below 1.0.
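The energy "ebb and flow" described above can be seen numerically. The short numpy sketch below (made-up 230 V, 10 A, 60° lagging values) computes the instantaneous power over one cycle: its average equals Vrms·Irms·cos φ, and part of each cycle has negative instantaneous power, i.e. energy returned to the grid.

```python
import numpy as np

f = 50.0                         # supply frequency, Hz
t = np.linspace(0, 1 / f, 1000)  # one full cycle
phi = np.radians(60)             # current lags voltage by 60 degrees

v = np.sqrt(2) * 230 * np.sin(2 * np.pi * f * t)        # instantaneous voltage
i = np.sqrt(2) * 10 * np.sin(2 * np.pi * f * t - phi)   # lagging current
p = v * i                                               # instantaneous power

print(p.mean())      # ~1150 W, equal to 230 * 10 * cos(60 deg) of real power
print((p < 0).any()) # True: part of each cycle returns energy to the grid
```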

Definition and calculation


AC power flow has three components: real power (also known as active power) (P), measured in watts (W); apparent power (S), measured in volt-amperes (VA); and reactive power (Q), measured in reactive volt-amperes (var).[6] The power factor is defined as:

power factor = P / S.

In the case of a perfectly sinusoidal waveform, P, Q and S can be expressed as vectors that form a vector triangle such that:

S² = P² + Q²

If φ is the phase angle between the current and voltage, then the power factor is equal to the cosine of the angle, cos φ, and:

|P| = |S| cos φ

Since the units are consistent, the power factor is by definition a dimensionless number between −1 and 1. When the power factor is equal to 0, the energy flow is entirely reactive, and stored energy in the load returns to the source on each cycle. When the power factor is 1, all the energy supplied by the source is consumed by the load. Power factors are usually stated as "leading" or "lagging" to show the sign of the phase angle. Capacitive loads are leading (current leads voltage), and inductive loads are lagging (current lags voltage).

If a purely resistive load is connected to a power supply, current and voltage will change polarity in step, the power factor will be unity (1), and the electrical energy flows in a single direction across the network in each cycle. Inductive loads such as transformers and motors (any type of wound coil) consume reactive power with the current waveform lagging the voltage. Capacitive loads such as capacitor banks or buried cable generate reactive power with the current phase leading the voltage. Both types of loads will absorb energy during part of the AC cycle, which is stored in the device's magnetic or electric field, only to return this energy back to the source during the rest of the cycle.

For example, to get 1 kW of real power, if the power factor is unity, 1 kVA of apparent power needs to be transferred (1 kW ÷ 1 = 1 kVA). At low values of power factor, more apparent power needs to be transferred to get the same real power. To get 1 kW of real power at 0.2 power factor, 5 kVA of apparent power needs to be transferred (1 kW ÷ 0.2 = 5 kVA). This apparent power must be produced and transmitted to the load in the conventional fashion, and is subject to the usual distributed losses in the production and transmission processes.

Electrical loads consuming alternating current power consume both real power and reactive power. The vector sum of real and reactive power is the apparent power. The presence of reactive power causes the real power to be less than the apparent power, and so, the electric load has a power factor of less than 1.
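A small Python sketch of the kVA arithmetic in the example above:

```python
def apparent_power_needed(real_power_w, power_factor):
    """Apparent power that must be supplied to deliver a given real power."""
    return real_power_w / power_factor

print(apparent_power_needed(1000, 1.0))   # 1000 VA (1 kVA) at unity power factor
print(apparent_power_needed(1000, 0.2))   # 5000 VA (5 kVA) at power factor 0.2
```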

Distortion power factor


The distortion power factor describes how the harmonic distortion of a load current decreases the average power transferred to the load. It is given by:

distortion power factor = I1,rms / Irms = 1 / √(1 + THDi²)

THDi is the total harmonic distortion of the load current. This definition assumes that the voltage stays undistorted (sinusoidal, without harmonics). This simplification is often a good approximation in practice. I1 is the fundamental component of the current and Irms is the total current; both are root-mean-square values. The result, when multiplied with the displacement power factor (DPF), is the overall, true power factor or just power factor (PF):

PF = DPF / √(1 + THDi²)
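A minimal Python sketch of these two relations, with made-up numbers for a distorting load (an assumed displacement power factor of 0.95 and 80% current THD):

```python
import math

def distortion_power_factor(thd_i):
    """Distortion power factor from current THD (expressed as a fraction)."""
    return 1.0 / math.sqrt(1.0 + thd_i ** 2)

def true_power_factor(displacement_pf, thd_i):
    """Overall power factor = displacement power factor * distortion power factor."""
    return displacement_pf * distortion_power_factor(thd_i)

print(round(distortion_power_factor(0.80), 3))   # ~0.781
print(round(true_power_factor(0.95, 0.80), 3))   # ~0.742
```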
