The life of a lead-acid storage battery depends on the use to which it is put and on the care it receives. With good care, it will last several years; with little or no care, it may be ruined in a month. The important rules for battery care are as follows:

1. Test storage batteries periodically. Always wear eye and clothing protection to shield yourself from battery acid.

2. If a battery is completely discharged, recharge it immediately.

3. When charging a battery, select a charging rate consistent with the time available for charging. When time is available, use the normal rate indicated in the product manufacturer’s literature.

4. If it is necessary to charge a battery at a very high rate, keep a careful check on the temperature of the electrolyte and never let it exceed 110°F. If cells release gas freely, reduce the charging rate to the normal rate.

5. Never try to charge batteries to a definite specific gravity. Maintain the charge until the same specific gravity reading is indicated at three successive half-hour intervals.

6. By the regular addition of distilled water only, maintain the level of the electrolyte above the top of the separators according to the manufacturer’s specifications. Rapid deterioration of a battery will result if the electrolyte level is allowed to remain below the top of the separators. Usually, maintenance-free batteries do not require the addition of water.

7. Add distilled water immediately before recharging a lead-acid battery. In the process of charging a traditional battery, the water in the electrolyte is changed into hydrogen gas and oxygen gas that escape through the vent holes. This water must be restored so that the level of the electrolyte is maintained. Maintenance-free batteries do not experience this electrolyte loss.

8. Never use a match to provide light when checking the electrolyte level. Hydrogen and oxygen mixed together are highly explosive. The area used for recharging must be well ventilated.

9. Never disconnect the leads to a battery while it is on charge. The spark that occurs at the terminals may ignite the gas and cause an explosion. Many times, a battery is to be charged while permanently mounted in position, such as in an automobile, where the negative terminal may be connected to a frame or an engine. To reduce the chance of an explosion, the negative lead of the charger should be connected to the frame instead of to the terminal.

10. Never take a specific gravity reading just after adding distilled water to a battery. Addition of distilled water dilutes the electrolyte and lowers the specific gravity. A reading then would indicate a state of charge below the actual condition of the battery.

11. Avoid spilling electrolyte when testing a battery with a hydrometer.

12. Never add acid or electrolyte to a battery unless it has been definitely determined that some electrolyte has been lost. If it is ever necessary to prepare electrolyte, remember that acid must be added to water, and must be added slowly.

13. When placing a battery on charge, do not remove the vent plugs. The plugs prevent acid spray from reaching the top surface of the battery and allow the gases to escape as noted in number 7 previously.

14. Remove deposits that may form on the terminals of a storage battery so that the metal will not be eaten away. The presence of a greenish-white deposit on battery terminals indicates corrosion. Remove this material by thoroughly cleaning the affected parts with a wire brush. Apply a strong solution of baking soda and water to all corroded parts to neutralize any acid that remains. Wash the battery with fresh water and dry with compressed air or a cloth. Finally, coat the terminals with petroleum jelly or other suitable material.

15. Do not draw a heavy discharge current except for short intervals of time. If high current is needed for a long period, use additional batteries connected in parallel.

16. Test storage batteries more frequently in very cold weather than in warm weather. A discharged battery freezes easily.
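Rule 5's end-of-charge test (the same specific gravity reading at three successive half-hour intervals) can be sketched as a simple check. The ±0.001 agreement tolerance here is an assumption for illustration, not a figure from the text:

```python
def charge_complete(sg_readings, tolerance=0.001):
    """Return True when the three most recent half-hourly hydrometer
    readings agree within `tolerance` (assumed value), i.e., the
    specific gravity has stopped rising and charging may end."""
    if len(sg_readings) < 3:
        return False  # need three successive readings first
    last3 = sg_readings[-3:]
    return max(last3) - min(last3) <= tolerance
```

For example, a sequence of readings ending 1.264, 1.265, 1.265 satisfies the test, while a still-rising sequence such as 1.230, 1.250, 1.260 does not.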


More than 500 nuclear power plants operate around the world. Close to 300 operate pressurized water reactors (PWRs), more than 100 are built with boiling-water reactors (BWRs), about 50 use gas-cooled reactors, and the rest are heavy-water reactors.

In addition, a few fast breeder reactors are in operation. These reactors are built for better utilization of uranium fuel. Modern nuclear plant sizes vary from 100 to 1200 MW.

Pressurized Water Reactor
The general arrangement of a power plant with a PWR is shown in Fig. 59.8(A). The reactor heats the water from about 550°F to about 650°F. High pressure, at about 2235 psi, prevents boiling.

Pressure is maintained by a pressurizer, and the water is circulated by a pump through a heat exchanger. The heat exchanger evaporates the feedwater and generates steam, which supplies a system similar to a conventional power plant.

The advantage of this two-loop system is the separation of the potentially radioactive reactor cooling fluid from the water-steam system. The reactor core consists of fuel and control rods. Grids hold both the control and fuel rods.

The fuel rods are inserted in the grid following a predetermined pattern. The fuel elements are Zircaloy-clad rods filled with UO2 pellets. The control rods are made of a silver (80%), cadmium (5%), and indium (15%) alloy protected by stainless steel.

The reactor operation is controlled by the position of the rods. In addition, control rods are used to shut down the reactor: the rods are released and fall into the core when an emergency shutdown is required.
Cooling water enters the reactor from the bottom, flows through the core, and is heated by nuclear fission.

Boiling-Water Reactor
In the BWR shown in Fig. 59.8(B), the pressure is low, about 1000 psi. The nuclear reaction heats the water directly to evaporate it and produce wet steam at about 545°F.

The remaining water is recirculated and mixed with feedwater. The steam drives a turbine that typically rotates at 1800 rpm. The rest of the plant is similar to a conventional power plant.

A typical reactor arrangement is shown in Fig. 59.9. The figure shows all the major components of a reactor. The fuel and control rod assembly is located in the lower part.

 The steam separators are above the core, and the steam dryers are at the top of the reactor. The reactor is enclosed by a concrete dome. 


There are three primary color transmission standards in use today:
• NTSC (National Television Systems Committee): Used in the United States, Canada, Central America, most of South America, and Japan. In addition, NTSC is used in various countries or possessions heavily influenced by the United States.

• PAL (Phase Alternation by Line): Used in England, most countries and possessions influenced by the British Commonwealth, many western European countries, and China. Variation exists among PAL systems.

• SECAM (Sequential Color with [Avec] Memory): Used in France, countries and possessions influenced by France, the USSR (generally the former Soviet Bloc nations), and other areas influenced by Russia.

The three standards are incompatible for a variety of reasons (see Benson and Whitaker, 1991). Television transmitters in the United States operate in three frequency bands:

• Low-band VHF (very high frequency), channels 2 through 6

• High-band VHF, channels 7 through 13

• UHF (ultra-high frequency), channels 14 through 83 (UHF channels 70 through 83 currently are assigned to mobile radio services)

Maximum power output limits are specified by the FCC for each type of service. The maximum effective radiated power (ERP) for low-band VHF is 100 kW; for high-band VHF it is 316 kW; and for UHF it is 5 MW.

The ERP of a station is a function of transmitter power output (TPO) and antenna gain. ERP is determined by multiplying these two quantities together and subtracting transmission line loss.
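That calculation can be sketched as follows, working with gain and loss in decibels; the example figures are illustrative, not values from the text:

```python
def erp_kw(tpo_kw, antenna_gain_db, line_loss_db):
    """ERP from transmitter power output (TPO): the antenna gain (dB)
    less the transmission-line loss (dB) gives a net gain that scales
    the TPO. All example figures are illustrative."""
    net_gain_db = antenna_gain_db - line_loss_db
    return tpo_kw * 10 ** (net_gain_db / 10)
```

For example, a 30-kW transmitter feeding an antenna with 13 dB of gain through a line with 3 dB of loss yields an ERP of 300 kW, within the 316-kW limit for high-band VHF.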

The second major factor that affects the coverage area of a TV station is antenna height, known in the broadcast industry as height above average terrain (HAAT). HAAT takes into consideration the effects of the geography in the vicinity of the transmitting tower.

The maximum HAAT permitted by the FCC for a low- or high-band VHF station is 1000 ft (305 m) east of the Mississippi River and 2000 ft (610 m) west of the Mississippi. UHF stations are permitted to operate with a maximum HAAT of 2000 ft (610 m) anywhere in the United States (including Alaska and Hawaii).

The ratio of visual output power to aural output power can vary from one installation to another; however, the aural is typically operated at between 10 and 20% of the visual power. This difference is the result of the reception characteristics of the two signals.

Much greater signal strength is required at the consumer’s receiver to recover the visual portion of the transmission than the aural portion. The aural power output is intended to be sufficient for good reception at the fringe of the station’s coverage area but not beyond. It is of no use for a consumer to be able to receive a TV station’s audio signal but not the video.

In addition to high-power stations, two classifications of low-power TV stations have been established by the FCC to meet certain community needs:
• Translator: A low-power system that rebroadcasts the signal of another station on a different channel. Translators are designed to provide “fill-in” coverage for a station that cannot reach a particular community because of the local terrain. Translators operating in the VHF band are limited to 100 W power output (ERP), and UHF translators are limited to 1 kW.

• Low-Power Television (LPTV): A service established by the FCC designed to meet the special needs of particular communities. LPTV stations operating on VHF frequencies are limited to 100 W ERP, and UHF stations are limited to 1 kW. LPTV stations originate their own programming and can be assigned by the FCC to any channel, as long as sufficient protection against interference to a full-power station is afforded.


Frequency-modulation (FM) broadcasting refers to the transmission of voice and music received by the general public in the 88- to 108-MHz frequency band. FM is used to provide higher-fidelity reception than is available with standard broadcast AM.

In 1961, stereophonic broadcasting was introduced with the addition of a double-sideband suppressed carrier for transmission of the left-minus-right difference signal. The left-plus-right sum channel is sent using normal FM.

Some FM broadcast systems also include a subsidiary communications authorization (SCA) subcarrier for private commercial uses. FM broadcast is typically limited to line-of-sight ranges. As a result, FM coverage is localized to a range of 75 mi (120 km) depending on the antenna height and ERP.

Frequency Allocations
The 100 carrier frequencies for FM broadcast range from 88.1 to 107.9 MHz and are equally spaced every 200 kHz. The channels from 88.1 to 91.9 MHz are reserved for educational and noncommercial broadcasting and those from 92.1 to 107.9 MHz for commercial broadcasting. Each channel has a 200 kHz bandwidth.
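The channel plan above maps directly to a small helper. The channel numbering here is a simple 1-100 index for illustration, not the FCC's official channel numbers:

```python
def fm_carrier_mhz(n):
    """Carrier frequency of the nth FM broadcast channel (n = 1..100):
    88.1 MHz plus 200-kHz steps."""
    if not 1 <= n <= 100:
        raise ValueError("FM broadcast has 100 channels")
    return round(88.1 + 0.2 * (n - 1), 1)

def is_noncommercial(freq_mhz):
    """88.1-91.9 MHz is reserved for educational and noncommercial
    broadcasting; the rest of the band is commercial."""
    return 88.1 <= freq_mhz <= 91.9
```

The first channel lands on 88.1 MHz and the hundredth on 107.9 MHz, matching the range quoted above.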

The maximum frequency swing under normal conditions is ±75 kHz. Stations operating with an SCA may under certain conditions exceed this level, but in no event may exceed a frequency swing of ±82.5 kHz. The carrier frequency is required to be maintained within ±2000 Hz. The frequencies used for FM broadcasting generally limit the coverage to the line-of-sight or a slightly greater distance.

The actual coverage area is determined by the ERP of the station and the height of the transmitting antenna above the average terrain in the area. Either increasing the power or raising the antenna will increase the coverage area.

Station Classifications
In FM broadcast, stations are classified according to their maximum allowable ERP and the transmitting antenna height above average terrain in their service area. Class A stations provide primary service to a radius of about 28 km with 6000 W of ERP at a maximum height of 100 m.

The most powerful class, Class C, operates with maximums of 100,000 W of ERP and heights up to 600 m with a primary coverage radius of over 92 km. The powers and heights above average terrain (HAAT) for all of the classes are shown in Table 69.5.

All classes may operate at antenna heights above those specified but must reduce the ERP accordingly. Stations may not exceed the maximum power specified, even if antenna height is reduced. The classification of the station determines the allowable distance to other co-channel and adjacent channel stations.
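A minimal lookup for the two classes quoted above (Table 69.5 carries the full set). Since the text gives no ERP-derating curve for antennas above the reference HAAT, this sketch only checks the simple limits:

```python
# Limits quoted in the text; Table 69.5 lists all classes.
FM_CLASS_LIMITS = {
    "A": {"max_erp_w": 6_000, "max_haat_m": 100},
    "C": {"max_erp_w": 100_000, "max_haat_m": 600},
}

def within_class_limits(station_class, erp_w, haat_m):
    """True when a station is inside its class's maximum ERP and HAAT.
    Note: stations may exceed the reference HAAT with a reduced ERP,
    but that derating curve is not given in the text and is not
    modeled here."""
    lim = FM_CLASS_LIMITS[station_class]
    return erp_w <= lim["max_erp_w"] and haat_m <= lim["max_haat_m"]
```

The maximum-power rule is one-sided: reducing antenna height never allows the ERP ceiling to be exceeded.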

Field Strength and Propagation
The field strength produced by an FM broadcast station depends on the ERP, antenna heights, local terrain, tropospheric scattering conditions, and other factors. A factor in the determination of new licenses for FM broadcast is the separation between allocated co-channel and adjacent channel stations, the class of station, and the antenna heights.

Although FM broadcast propagation is generally thought of as line-of-sight, larger ERPs along with the effects of diffraction, refraction, and tropospheric scatter allow coverage slightly greater than line-of-sight.

FM broadcast transmitters typically range in power output from 10 W to 50 kW. The highest-powered solid-state transmitters are currently 10 kW, but manufacturers are developing new devices that will make higher-power solid-state transmitters both cost-efficient and reliable.

Antenna Systems
FM broadcast antenna systems are required to have a horizontally polarized component. Most antenna systems, however, are circularly polarized, having both horizontal and vertical components. The antenna system, which usually consists of several individual radiating bays fed as a phased array, has a radiation characteristic that concentrates the transmitted energy in the horizontal plane toward the population to be served, minimizing the radiation out into space and down toward the ground.

Thus, the ERP towards the horizon is increased with gains up to 10 dB. This means that a 5-kW transmitter coupled to an antenna system with a 10-dB gain would have an ERP of 50 kW. Directional antennas may be employed to avoid interference with other stations or to meet spacing requirements.


After a disturbance, usually due to a network fault, the electrical loading of the synchronous machines changes and the machines speed up (under very light loading conditions they can slow down). Each machine will react differently depending on its proximity to the fault, its initial loading, and its time constants.

This means that the angular positions of the rotors relative to each other change. If any angle exceeds a certain threshold (usually between 140° and 160°) the machine will no longer be able to maintain synchronism. This almost always results in its removal from service.

Early work on transient stability concentrated on the reaction of one synchronous machine coupled to a very large system through a transmission line. The large system can be assumed to be infinite with respect to the single machine and hence can be modeled as a pure voltage source. The synchronous machine is modeled by the three phase windings of the stator plus windings on the rotor representing the field winding and the eddy current paths.

These are resolved into two axes, one in line with the direct axis of the rotor and the other in line with the quadrature axis situated 90° (electrical) from the direct axis. The field winding is on the direct axis. Equations can be developed which determine the voltage in any winding depending on the current flows in all the other windings.

A full set of differential equations can be produced which allows the response of the machine to various electrical disturbances to be found. The variables must include rotor angle and rotor speed which can be evaluated from knowledge of the power from the turbine into, and power to the system out of the machine.
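Those rotor-angle and rotor-speed variables are governed by the swing equation. A minimal single-machine, infinite-bus sketch (classical model, explicit Euler integration, no damping; all parameter values are assumptions for illustration) shows how the angle trajectory follows from the turbine input power and the electrical output power:

```python
import math

def swing_response(pm_pu, pmax_pu, h_s, delta0_rad=0.5,
                   dt=0.001, t_end=2.0, f_hz=60.0, threshold_deg=150.0):
    """Integrate M * d2(delta)/dt2 = Pm - Pmax*sin(delta), with
    M = 2H/omega_s (classical machine model, no damping).
    Returns (max rotor angle in degrees, True if the angle stayed
    below the roughly 140-160 degree synchronism threshold)."""
    omega_s = 2 * math.pi * f_hz
    m = 2 * h_s / omega_s
    delta, omega = delta0_rad, 0.0            # start at rest
    max_delta = delta
    for _ in range(int(t_end / dt)):
        pa = pm_pu - pmax_pu * math.sin(delta)  # accelerating power
        omega += (pa / m) * dt
        delta += omega * dt
        max_delta = max(max_delta, delta)
    return math.degrees(max_delta), math.degrees(max_delta) < threshold_deg
```

With a mechanical input of 0.8 pu against a 2.0-pu transfer limit the angle merely oscillates, whereas a weakened network whose transfer limit is below the mechanical input lets the angle run away past the threshold.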

The great disadvantage with this type of analysis is that the rotor position is constantly changing as it rotates. As most of the equations involve trigonometrical functions relating to stator and rotor windings, the matrices must be constantly reevaluated. In the most severe cases of network faults the results, once the dc transients decay, are balanced.

Further, on removal of the fault the network is considered to be balanced. There is thus much computational effort involved in obtaining detailed information for each of the three phases which is of little value to the power system engineer. By contrast, this type of analysis is very important to machine designers.

However, programs have been written for multi-machine systems using this method. Several power system catastrophes in the U.S. and Europe in the 1960s gave a major boost to developing transient stability programs. What was required was a simpler and more efficient method of representing the machines in large power systems.

Initially, transient stability programs all ran in the time domain. A set of differential equations is developed to describe the dynamic behavior of the synchronous machines. These are linked together by algebraic equations for the network and any other part of the system that has a very fast response, i.e., an insignificant time constant, relative to the synchronous machines. All the machine equations are written in the direct and quadrature axes of the rotor so that they are constant regardless of the rotor position.

The network is written in the real and imaginary axes similar to that used by the load flow and faults programs. The transposition between these axes only requires knowledge of the rotor angle relative to the synchronously rotating frame of reference of the network.
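That transposition is just a rotation through the rotor angle. A sketch follows; sign and phase conventions vary between texts, so this particular convention is an assumption:

```python
import cmath

def dq_to_network(x_d, x_q, rotor_angle_rad):
    """Machine d-q quantities -> network real/imaginary frame:
    (re + j*im) = (d + j*q) * exp(j*delta), where delta is the rotor
    angle relative to the synchronously rotating network reference."""
    return (x_d + 1j * x_q) * cmath.exp(1j * rotor_angle_rad)

def network_to_dq(z, rotor_angle_rad):
    """Inverse transformation back to the rotor's d-q axes."""
    w = z * cmath.exp(-1j * rotor_angle_rad)
    return w.real, w.imag
```

The two functions are exact inverses, so machine and network equations can be solved in their own frames and linked only through the rotor angle.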

Later work involved looking at the response of the system, not to major disturbances but to the build-up of oscillations due to small disturbances and poorly set control systems. As the time involved for these disturbances to occur can be large, time domain solutions are not suitable and frequency domain models of the system were produced.

Lyapunov functions have also been used, but good models have been difficult to produce. However, they are now of sufficiently good quality to compete with time domain models where quick estimates of stability are needed, such as in the day-to-day operation of a system.


Load Flow (Power Flow)
The need to know the flow patterns and voltage profiles in a network was the driving force behind the development of load flow programs. Although the network is linear, load flow analysis is iterative because of nodal (busbar) constraints.

At most busbars the active and reactive powers being delivered to customers are known but the voltage level is not. As far as the load flow analysis is concerned, these busbars are referred to as PQ buses. The generators are scheduled to deliver a specific active power to the system and usually the voltage magnitude of the generator terminals is fixed by automatic voltage regulation.

These busbars are known as PV buses. As losses in the system cannot be determined before the load flow solution, one generator busbar only has its voltage magnitude specified. In order to give the required two specifications per node, this bus also has its voltage angle defined to some arbitrary value, usually zero.

This busbar is known as the slack bus. The slack bus is a mathematical requirement for the program and has no exact equivalent in reality. However, in operating practice, the total load plus the losses are not known. When a system is not in power balance, i.e., when the input power does not equal the load power plus losses, the imbalance modifies the rotational energy stored in the system.

The system frequency thus rises if the input power is too large and falls if the input power is too little. Usually a generating station and probably one machine is given the task of keeping the frequency constant by varying the input power. This control of the power entering a node can be seen to be similar to the slack bus.
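The balance described above can be sketched with a crude linearized model; the lumped inertia constant and the one-second step are illustrative assumptions:

```python
def frequency_after_imbalance(p_input_mw, p_load_mw, p_losses_mw,
                              inertia_mws_per_hz, f0_hz=60.0, dt_s=1.0):
    """Surplus input power is stored as rotational energy and raises
    the frequency; a deficit lowers it. Linearized single-step model
    for illustration only."""
    imbalance_mw = p_input_mw - (p_load_mw + p_losses_mw)
    return f0_hz + imbalance_mw * dt_s / inertia_mws_per_hz
```

A 5-MW surplus against a 100 MW·s/Hz equivalent inertia nudges the frequency up by 0.05 Hz over one second; a deficit moves it down, which is what the frequency-regulating machine corrects.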

The algorithms first adopted had the advantages of simple programming and minimum storage, but were slow to converge, requiring many iterations. The introduction of ordered elimination, which gives implicit inversion of the network matrix, and of sparsity programming techniques, which reduce storage requirements, allowed much better algorithms to be used.

The Newton-Raphson method gave convergence to the solution in only a few iterations. Using Newtonian methods of specifying the problem, a Jacobian matrix containing the partial derivatives of the system at each node can be constructed. The solution by this method has quadratic convergence. This method was followed quite quickly by the Fast Decoupled Newton-Raphson method.
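A minimal two-bus illustration of the Newton-Raphson load flow: a slack bus held at 1.0 pu and 0°, and one PQ bus behind a purely reactive line. All per-unit values are assumptions chosen for illustration:

```python
import math

def two_bus_newton(p_load, q_load, x_line=0.1, tol=1e-10, max_iter=20):
    """Newton-Raphson solution for the PQ bus voltage magnitude v and
    angle th (radians) behind a lossless line of reactance x_line.
    Injections at the PQ bus: P = b*v*sin(th), Q = b*(v*v - v*cos(th)),
    with b = 1/x_line; the 2x2 Jacobian is formed analytically."""
    b = 1.0 / x_line
    v, th = 1.0, 0.0                      # flat start
    p_spec, q_spec = -p_load, -q_load     # load = negative injection
    for _ in range(max_iter):
        p_calc = b * v * math.sin(th)
        q_calc = b * (v * v - v * math.cos(th))
        dp, dq = p_spec - p_calc, q_spec - q_calc
        if max(abs(dp), abs(dq)) < tol:
            return v, math.degrees(th)
        j11 = b * v * math.cos(th)          # dP/d(theta)
        j12 = b * math.sin(th)              # dP/dv
        j21 = b * v * math.sin(th)          # dQ/d(theta)
        j22 = b * (2.0 * v - math.cos(th))  # dQ/dv
        det = j11 * j22 - j12 * j21
        th += (j22 * dp - j12 * dq) / det   # solve J * [dth, dv] = [dp, dq]
        v += (-j21 * dp + j11 * dq) / det
    raise RuntimeError("load flow did not converge")
```

From a flat start the mismatch typically vanishes in three or four iterations, illustrating the quadratic convergence mentioned above.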

This exploited the fact that, under normal operating conditions and provided that the network is predominantly reactive, the voltage angles are not affected by reactive power flow and the voltage magnitudes are not affected by real power flow.

The Fast Decoupled method requires more iterations to converge, but each iteration uses less computational effort than the Newton-Raphson method. A further advantage of this method is the robustness of the algorithm.

Further refinements can be added to a load flow program to make it give more realistic results. Transformer on-load tap changers, voltage limits, active and reactive power limits, plus control of the voltage magnitudes at buses other than the local bus help to bring the results close to reality. Application of these limits can slow down convergence.

The problem of obtaining an accurate load flow solution with guaranteed, fast convergence has resulted in more technical papers than any other analysis topic. This is understandable when it is realized that the load flow solution is required during the running of many other types of power system analyses.

While improvements have been made, there has been no major breakthrough in performance. It is doubtful if such an achievement is possible as the time required to prepare the data and process the results represents a significant part of the overall time of the analysis.


DESCRIPTION: Specification
Nominal Voltage: 69 kV
Manufacturer: xxx (your choice)
Type: SSB-III-72.5
Construction: Horizontal Double Side Break Disconnector
Installation: Horizontal/Vertical
Poles: 3-pole disconnect
Operating Mechanism: Motor Operated Drive (for outdoor) or Manual (your choice)
Design Voltage: 72.5 kV
B.I.L.: 350 kV
Operating Frequency: 60 Hz
Continuous Current: 2,000 A
Momentary Current: 51 kA (ANSI)
Short Circuit Capacity: 40 kA
Peak Withstand Current: 100 kA (IEC)
Accessories: Complete with 2 insulators per pole, steel channel, IEC std. fittings


Electrical power systems are, in general, fairly complex systems composed of a wide range of equipment devoted to generating, transmitting, and distributing electrical power to various consumption centers. The very complexity of these systems suggests that failures are unavoidable, no matter how carefully these systems have been designed.

Designing and operating a system with a zero failure rate is, if not unrealistic, economically unjustifiable. Within the context of short-circuit analysis, system failures manifest themselves as insulation breakdowns that may lead to one of the following phenomena:

— Undesirable current flow patterns
— Appearance of currents of excessive magnitudes that could lead to equipment damage and downtime
— Excessive overvoltages, of a transient and/or sustained nature, that compromise the integrity and reliability of various insulated parts
— Voltage depressions in the vicinity of the fault that could adversely affect the operation of rotating equipment
— Creation of system conditions that could prove hazardous to personnel

Because short circuits cannot always be prevented, we can only attempt to mitigate and to a certain extent contain their potentially damaging effects. One should, at first, aim to design the system so that the likelihood of the occurrence of the short circuit becomes small.

If a short circuit occurs, however, mitigating its effects consists of a) managing the magnitude of the undesirable fault currents, and b) isolating the smallest possible portion of the system around the area of the mishap in order to retain service to the rest of the system. A significant part of system protection is devoted to detecting short-circuit conditions in a reliable fashion.

Considerable capital investment is required in interrupting equipment at all voltage levels that is capable of withstanding the fault currents and isolating the faulted area. It follows, therefore, that the main reasons for performing short-circuit studies are the following:

— Verification of the adequacy of existing interrupting equipment. The same type of studies will form the basis for the selection of the interrupting equipment for system planning purposes.

— Determination of the system protective device settings, which is done primarily using quantities that characterize the system under fault conditions. These quantities, also referred to as “protection handles,” typically include phase and sequence currents or voltages and rates of change of system currents or voltages.

— Determination of the effects of the fault currents on various system components such as cables, lines, busways, transformers, and reactors during the time the fault persists. Thermal and mechanical stresses from the resulting fault currents should always be compared with the corresponding short-term, usually first-cycle, withstand capabilities of the system equipment.

— Assessment of the effect that different kinds of short circuits of varying severity may have on the overall system voltage profile. These studies will identify areas in the system for which faults can result in unacceptably widespread voltage depressions.

— Conceptualization, design, and refinement of system layout, neutral grounding, and substation grounding, as well as verification of compliance with codes and regulations governing system design and operation, such as the National Electrical Code® (NEC®) (NFPA 70-1996) [B6], Article 110-9.
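As a first-order illustration of the adequacy check in the first item: the symmetrical three-phase bolted-fault current follows from the prefault voltage and the Thevenin impedance at the fault point. Real studies use the full network model; these figures are hypothetical:

```python
import math

def three_phase_fault_ka(v_ll_kv, z_source_ohm):
    """Symmetrical three-phase bolted-fault current (kA): line-to-line
    prefault voltage (kV) over sqrt(3) times the Thevenin source
    impedance (ohms) at the fault point."""
    return v_ll_kv / (math.sqrt(3) * z_source_ohm)

def breaker_adequate(fault_ka, interrupting_rating_ka):
    """Adequacy check of the kind described in the first bullet: the
    device's interrupting rating must cover the computed fault duty."""
    return interrupting_rating_ka >= fault_ka
```

For example, a 13.8-kV bus behind a 0.5-ohm Thevenin impedance sees roughly 15.9 kA, so a 20-kA breaker is adequate while a 15-kA breaker is not.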


For low-voltage circuit protection in the U.S., circuit breaker designs and tests are based on the requirements of three standards organizations: the American National Standards Institute (ANSI), Underwriters Laboratories (UL), and the National Electrical Manufacturers Association (NEMA).

The two classifications of circuit breakers these organizations defined are as follows:

- Molded-case circuit breaker class
- Low-voltage power circuit breaker class

Three types of circuit breakers are based on the two classifications above. The classifications themselves lend their names to the first two of the three types, while the third type, derived from the molded-case circuit breaker class, is known as an insulated-case circuit breaker.

The three types of circuit breakers are as follows:

- Molded-case circuit breakers (MCCBs)
- Low-voltage power circuit breakers (LVPCBs)
- Insulated-case circuit breakers (ICCBs)

The following are some of the salient features of these types of circuit breakers. MCCBs, as a class, are those tested and rated according to UL 489-1991 and whose current carrying parts, mechanisms, and trip devices are completely contained within a molded case of insulating material. MCCBs are available in the widest range of sizes, from the smallest (15 A or less) to the largest (6000 A), and with various interrupting ratings for each frame size.

They are generally characterized by fast-interrupting short-circuit elements. With electronic trip units, they can have limited short-delay and ground-fault sensing capability.

Virtually all MCCBs interrupt fast enough to limit the amount of prospective fault current let through and some are fast enough and limiting enough to be identified as current-limiting circuit breakers.

MCCBs are not designed to be field maintainable. ICCBs are also rated and tested according to UL 489-1991. However, they utilize design characteristics from both the power and molded-case classes. They are of the larger frame sizes, fast in interruption, but normally not fast enough to qualify as current-limiting circuit breakers.

ICCBs also utilize electronic trip units and can have short-time ratings and ground-fault current sensing. They utilize stored-energy operating mechanisms similar to those designed for LVPCBs, and their design is such that they are partially field maintainable. LVPCBs are rated and tested to satisfy ANSI C37 standard requirements and are used primarily in drawout switchgear. They are generally characterized as being the largest in physical size.

They have short-time ratings, but they are not fast enough in interruption to qualify as current-limiting. LVPCBs are designed to be maintainable in the field. The ANSI C37 series of standards and UL 489-1991 were developed jointly by IEEE and NEMA and apply to LVPCBs and to ICCBs/MCCBs, respectively.


Static electrification (SE) in transformers is an interfacial phenomenon involving the oil, the paper, and the transformer board. Its physical mechanism involves a source of charge and a region of excessive charge accumulation.

Extensive investigations of this phenomenon have been made in recent years. When oil is forced through the tank and coolers, it acquires an electrostatic charge even though the bulk oil contains equal numbers of positive and negative ions.

When the oil passes the paper and solid insulation in the windings, the insulation becomes negatively charged and the oil positively charged, with the charge separation occurring at the oil-insulation interface (Fig. 6.16).

The earliest reports of this phenomenon came in the 1970s from Japan, where a number of large h.v. transformer failures occurred. Later, quite a few SE-related incidents were also reported in the USA and other countries.

It is believed that transformers of large rating (e.g., >100 MVA) are the most likely to be affected by SE because they possess greater amounts of insulation and require larger oil flow volumes than transformers of smaller ratings.

As different oils have different electrostatic charging tendencies (ECT), oil additives might be a way to reduce oil ECT. As an alternative to additives, used oil can be regenerated, because new oil exhibits a lower ECT than aged oil.

On the other hand, operation practices are also of great importance. SE incidents can be caused by poor operating practices such as increasing forced oil cooling capacity beyond manufacturer’s recommendations, or having more forced oil cooling in operation than the load on the transformer justifies.


Many electric utilities have employed equipment condition monitoring (ECM) to maintain electric equipment in top operating condition while minimizing the number of interruptions.

With ECM, equipment operating parameters are automatically tracked to detect the emergence of various abnormal operating conditions.

This allows substation operations personnel to take timely action when needed to improve reliability and extend equipment life.

This approach is applied most frequently to substation transformers and high-voltage electric supply circuit breakers to minimize the maintenance costs of these devices, to improve their availability, and to extend their useful life.

Equipment availability and reliability can be improved by reducing the amount of off-line maintenance and testing required and by reducing the number of equipment failures.

To be truly effective, equipment condition monitoring should be part of an overall condition-based maintenance strategy that has been properly designed and integrated into the regular maintenance program.

ECM IEDs are being implemented by many utilities. In most implementations, the communication link to the IED is via a dial-up telephone line.

To facilitate integrating these IEDs into the substation architecture, the ECM IEDs must support at least one of today’s widely used IED protocols: Modbus, Modbus Plus, or DNP3 (distributed network protocol). In addition, a migration path to UCA is desired.

If the ECM IEDs can be integrated into the substation architecture, the operational data will have a path to the SCADA system, and the nonoperational data will have a path to the utility’s data warehouse. In this way, the users and systems throughout the utility that need this information will have access to it.

Once the information is brought out of the substation and into the SCADA system and data warehouse, users throughout the utility can share it. The “private” databases that result in islands of automation will go away.

Therefore, the goal of every utility is to integrate these ECM IEDs into a standard substation integration architecture so that both operational and nonoperational information from the IEDs can be shared by utility users.
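For illustration, the on-the-wire request for one of these protocols is simple enough to sketch. The following builds a Modbus/TCP read of two holding registers, the kind of poll a substation data concentrator might issue to an ECM IED; the unit ID and register addresses are hypothetical, not taken from any real device:

```python
import struct

def read_holding_registers_request(unit_id, start_addr, count, tx_id=1):
    """Build a Modbus/TCP request (function code 3, read holding registers).
    The register map polled here is hypothetical, not from any real IED."""
    pdu = struct.pack(">BHH", 3, start_addr, count)               # function, address, quantity
    mbap = struct.pack(">HHHB", tx_id, 0, len(pdu) + 1, unit_id)  # MBAP header
    return mbap + pdu
```

DNP3 and UCA framing are more elaborate, but the integration principle is the same: a standard request/response format that any master station can speak.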


Since its first proposal in 1966, the economics behind optical fiber technology have changed radically. The major components within the communications system comprise the fiber (and the resulting cable), the connections and the opto-electronic conversion equipment necessary to convert the electrical signal to light and vice versa.

In the early years of optical transmission the relatively high cost of the above items had to be balanced by the savings achieved within the remainder of the system. In the case of telecommunications these other savings were generated by the removal of repeater/regenerator stations.

Thus the concept of ‘break-even’ distance grew rapidly and was broadly defined as the distance at which the total cost of a copper system would be equivalent to that of the optical fiber alternative. For systems in excess of that length the optical option would offer overall cost savings whereas shorter-haul systems would favour copper – unless other technical factors overrode that choice.

It is not surprising therefore that long-range telecommunications was the first user group to seriously consider the optical medium. Similarly the technology was an obvious candidate in the area of long-range video transmission (motorway surveillance, cable and satellite TV distribution). The cost advantages were immediately apparent and practical applications were soon forthcoming.

Based upon the volume production of cable and connectors for the telecommunications market the inevitable cost reductions tended to reduce the ‘break-even’ distance. When the argument is purely on cost grounds it is a relatively straightforward decision.
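The ‘break-even’ comparison can be sketched numerically. All prices below are hypothetical placeholders; only the shape of the trade-off (per-km cable cost plus fixed or distance-stepped equipment cost) follows the text:

```python
import math

def system_costs(d_km, copper_per_km, repeater_cost, repeater_spacing_km,
                 fiber_per_km, optoelec_cost):
    """Total installed cost of a copper link (cable plus a regenerator every
    repeater_spacing_km) versus a fiber link (cable plus one opto-electronic
    converter pair). All prices are hypothetical placeholders."""
    copper = copper_per_km * d_km + repeater_cost * math.floor(d_km / repeater_spacing_km)
    fiber = fiber_per_km * d_km + optoelec_cost
    return copper, fiber

def break_even_km(step_km=0.1, max_km=1000.0, **prices):
    """Scan outwards until fiber first becomes the cheaper option."""
    d = step_km
    while d <= max_km:
        copper, fiber = system_costs(d, **prices)
        if fiber <= copper:
            return d
        d += step_km
    return None
```

With copper at 5 units/km plus a 40-unit regenerator every 10 km, against fiber at 4 units/km plus a 30-unit converter pair, fiber wins as soon as the first regenerator is needed; cheaper converters shorten the break-even distance further.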

Unfortunately even when the cost of cabling is fairly matched between copper and fiber optics the additional cost of optoelectronic converters cannot be ignored. Until certain key criteria are met the complete domination of data communications by optical fiber cannot be achieved or even expected.

These criteria are as follows:
• standardization of fiber type such that telecommunications product can be used in all application areas;
• reductions in the cost of opto-electronic converters based upon large volume usage;
• a widespread requirement for data transmission at speeds which increase the cost of the copper medium or, in the extreme, preclude the use of copper totally.

These three milestones are rapidly being approached; the first two by the application of fiber to the telecommunications subscriber loop (to the home) whilst the third is more frequently encountered due to vastly increased needs for services.

Meanwhile the economics of fiber optic cabling dictate that while ‘break-even’ distances have decreased, the widespread use of ‘fiber-to-the-desk’ is still some time away. There is a popular misconception in the press that the ‘fiber optic revolution’ has not yet occurred. It is evidently assumed that the revolution is an overnight occurrence that miraculously converts every copper cabling installation to optical fiber. This is rather unfortunate propaganda and, to a great extent, both untrue and unrealistic.


The diesel, or compression-ignition, engine is one of the four principal types of internal combustion engine; that is, it is a machine that converts the chemical energy released from the burning of a fuel in an internal combustion chamber directly to mechanical energy.

Although the diesel is a reciprocating machine, its mechanical energy is transferred from the engine by means of a rotating shaft that may be used to drive other mechanical, hydraulic, pneumatic, or electrical machines and equipment.

Worldwide there are many diesel engine manufacturers, and the engine types available range from extremely powerful low-speed two-stroke engines of up to 70 MW, through high-speed automotive-type engines, down to low-power portable units of less than 2 hp (1.492 kW).

In industrial and marine applications, diesel engines are used mainly in the generation of electrical power, both ac and dc. In this article the topics addressed are the diesel engine itself and the production of ac power by diesel-powered generators.

The main uses of diesel-generators are:

1. For base-load duties in locations where there is no utility supply—that is, usually in remote locations, on islands, or on ships and submarines.

2. As independent power sources where it is essential to ensure that a continuous supply of electrical power of acceptable quality is maintained at all times. Such systems are usually referred to as uninterruptible power systems or no-break systems.

3. For ‘‘peak-lopping’’ or ‘‘peak-shaving’’ duties to limit the maximum or peak demand from a utility supply and so reduce the premium unit charge rate and hence the overall cost of the supply.

4. As standby or emergency power generation in case of major failure (blackouts) or partial shutdowns (brownouts) of the main or utility supply. (Such units are common in telecommunication centers, hospitals, mainframe centers, major financial institutions, and government buildings.)

5. Transportable (usually trailer mounted) generation units for providing temporary increases in the main supply especially in remote areas.

6. As part of a cogeneration, sometimes titled CHP (combined heat and power), plant.
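The ‘peak-lopping’ duty in item 3 amounts to a simple dispatch rule; a minimal sketch, with illustrative limits and ratings:

```python
def peak_shave(demand_kw, utility_limit_kw, genset_max_kw):
    """Split demand between utility and diesel-generator so the utility
    draw never exceeds the contracted limit (illustrative dispatch rule).
    Returns (utility draw, genset output) in kW."""
    genset_kw = min(max(demand_kw - utility_limit_kw, 0.0), genset_max_kw)
    return demand_kw - genset_kw, genset_kw
```

Keeping the utility draw below the contracted maximum avoids the premium charge rate mentioned above; the genset runs only during the peaks.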


The main advantages of using diesel driven electrical power generators are (not in rank order):

1. Performance.  Diesel engines normally have high thermal efficiencies, in the region of 40% and higher, almost regardless of their size. Some current state-of-the-art engines can achieve efficiencies over 50%, and engine manufacturers have forecast efficiencies as high as 60% by the twenty-first century.

2. Maintenance.  Diesels represent mature and well-developed technology and are comparatively easy to maintain on site without the need for fully skilled personnel except for certain nonroutine tasks.

3. Durability and Reliability.  Diesels have long lifetimes in the range, on average, of at least 20 to 25 years, and they can operate 7000 to 8000 h per year and in some cases up to 12,000 h between regular major overhauls.

4. Fuel Efficiency.  In most power-generation applications, diesels have the most competitive fuel consumption rates, and between half-load and full-load their fuel consumption rate is reasonably constant. Depending upon the application, size of engine, loading, and the operating environment, diesel engines normally have a specific fuel consumption in the range 160 to 360 g/kWh. The new Sulzer Diesel RTA two-stroke engines are claimed to be able to produce up to 35,431 kW (47,520 bhp) with a specific fuel consumption as low as 154 g/kWh (115 g/bhp·h).

5. Transportability.  Diesel-generators can be transported on purpose-built trucks or in specially equipped containers by land, sea, or air so that they can be used immediately on arriving on-site even in remote areas. For their physical weight and size, they can generate large amounts of electrical energy, sufficient to supply a small town.

6. Cost.  The cost per unit power installed is very competitive, but it must be emphasized that in costing diesel-power generation it is crucial to determine the total installed costs, not simply the capital cost of the engine and the generator. As a general rule of thumb, the speed of crankshaft rotation basically determines the weight, size, and cost of an engine in relation to its output power.

7. Operational Flexibility.  Diesels can use a wide variety of fuel qualities and can be designed to use both liquid and gaseous fuels; that is, they are ‘‘dual-fuel’’ engines. They can also be adapted for use in cogeneration and total-energy systems and in ‘‘non-air’’ environments.

8. Environmentally Compliant.  Diesels inherently produce low amounts of harmful exhaust emissions. However, in recent years, engines have had to be redesigned and exhaust-emissions treatment systems upgraded to meet increasingly stringent regulations. It is certain that further advances in the efficacy of emission reduction techniques will be required for all fossil-fuel power systems in the future.
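The efficiency figures in item 1 and the specific fuel consumption figures in item 4 are two views of the same quantity. Assuming a diesel lower heating value of roughly 42.7 MJ/kg (an assumed figure, not stated above):

```python
def thermal_efficiency(sfc_g_per_kwh, lhv_mj_per_kg=42.7):
    """Brake thermal efficiency implied by a specific fuel consumption.
    1 kWh of output is 3.6 MJ; the fuel LHV of ~42.7 MJ/kg is an assumption."""
    fuel_energy_mj = (sfc_g_per_kwh / 1000.0) * lhv_mj_per_kg  # MJ in per kWh out
    return 3.6 / fuel_energy_mj
```

At 154 g/kWh this gives a brake thermal efficiency of about 55%, consistent with the state-of-the-art figures quoted in item 1.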


Safety systems protect life and property from damage or loss due to accidents. For equipment, the degree of protection should be based on the value and criticality of the facility.

Personnel safety is covered rigorously in the NEC and many other standards. Defining the degree of protection for equipment requires an in-depth knowledge of the installation and its function.

The following questions should be considered when designing these systems:

a) How long will it take to replace the equipment and at what cost?
b) Can the function of the facility be performed elsewhere?
c) Loss of what key component would result in operation interruptions?

Safety systems can be as simple as a manually operated emergency power-off button, or as complex as a fully interlocked system. However, the more complex a fully integrated system becomes, the higher the probability of system confusion or failure.

Typical systems include the following functions:
— Smoke and fire protection
— Environmental control
— Smoke exhaust
— Fire extinguishing
— Emergency lighting
— Security

The interfacing of a safety system is generally unique for each installation and requires a logical design approach. Through a well-defined logic matrix and sequence priorities, it is possible to develop a system that can be maintained, modified, or expanded with little confusion and minimum expense.
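As a sketch of such a logic matrix, each input condition can map to a prioritized set of output actions. All condition and action names here are hypothetical, chosen only to mirror the functions listed above:

```python
# Minimal logic-matrix sketch: each input condition maps to the set of
# output actions it triggers (all names are hypothetical placeholders).
MATRIX = {
    "smoke_detected": {"smoke_exhaust_on", "hvac_off"},
    "fire_confirmed": {"extinguishing_release", "power_off", "alarm"},
    "power_fail":     {"emergency_lighting_on"},
}

def actions(active_inputs):
    """Union of all actions demanded by the currently active inputs."""
    out = set()
    for condition in active_inputs:
        out |= MATRIX.get(condition, set())
    return out
```

Keeping the matrix in one declarative table is what makes the system easy to maintain, modify, or expand, as the text suggests: a new sensor or output is one new entry, not new wiring logic.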

Generally, safety systems operate from 120 V ac, 24 V ac, or 24 V and 12 V dc. In any case, these systems must remain powered at all times. The quality of the power supplied to these systems is as important as that of the power delivered to the IT system.

Disturbances in the power supply of the safety system can cause shutdown of the protected system.


Road vehicles emit a significant share of air-borne pollution, including 18% of America’s suspended particulates, 27% of the volatile organic compounds, 28% of Pb, 32% of nitrogen oxides, and 62% of CO. Vehicles also release 25% of America’s energy-related CO2, the principal greenhouse gas. World pollution numbers continue to grow even more rapidly as millions of people gain access to public and personal transportation.

Electrification of our energy economy and the rise of automotive transportation are two of the most significant technological revolutions of the twentieth century. Both exemplify a massive change in lifestyle made possible by the growth in fossil energy supplies.

From negligible energy markets in 1900, electrical generation now accounts for 34% of the primary energy consumption in the United States, while transportation consumes 27% of the energy supply. Increased fossil fuel use has financed these energy expansions: coal and natural gas provide more than 65% of the energy used to generate the nation’s electricity, while refined crude oil fuels virtually all of the 250 million vehicles now cruising the U.S. roadways. Renewable energy, however, provides less than 2% of the energy used in either market.

The electricity and transportation energy revolutions of the 1900s grew in different, large, non-overlapping markets. Electricity is used extensively in the commercial, industrial, and residential sectors, but it supplies almost none of the energy used in the transportation market. Oil, conversely, contributes only 3% of the energy input for electricity.

Oil use for transportation, by contrast, is large and growing. More than two-thirds of the oil consumed in the United States is used for transportation purposes, mostly for cars, trucks, and buses.

Aircraft account for a further 14% of oil consumption, while ships and locomotives consume the remaining 5%. Oil use in the transportation sector has exceeded total domestic oil production every year since 1986, forcing the United States to rely on oil imports.

The present rate of consumption of fossil fuels for electrification and transportation is 100,000 times faster than the rate at which they are being created by natural forces. As the readily exploited fuels are consumed, the remaining fossil fuels become more costly and difficult to extract.

These demands make it important to develop energy systems based on renewable resources as an alternative to fossil fuels. Little progress has been made, however, in using electricity generated from a centralized power grid for transportation purposes. In 1900, electric cars outnumbered gasoline cars by almost a factor of two.

In addition to being less polluting, the electric cars of 1900 were silent machines. Favorites of the urban social elite, they were the cars of choice because they did not require the difficult and rather dangerous hand-crank starters. More than 100 EV manufacturers were active in their development.

However, the weight of these vehicles, their long recharging time, and the poor durability of their batteries reduced the ability of electric cars to gain a long-term market presence. One pound of gasoline contained the chemical energy equivalent of 100 pounds of lead-acid batteries.
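The hundred-to-one comparison can be checked with round numbers. Assuming roughly 12.9 kWh/kg for gasoline, 20% engine efficiency, 30 Wh/kg for early lead-acid cells, and 85% electric drivetrain efficiency (all assumed figures, not from the text):

```python
def useful_wh_per_kg(specific_energy_wh_per_kg, conversion_efficiency):
    """Useful work available per kilogram of an energy carrier."""
    return specific_energy_wh_per_kg * conversion_efficiency

# Assumed round numbers for illustration only:
gasoline = useful_wh_per_kg(12900, 0.20)  # ~2580 Wh/kg at the crankshaft
battery = useful_wh_per_kg(30, 0.85)      # ~25.5 Wh/kg at the wheels
ratio = gasoline / battery                # roughly 100x, matching the text
```

Even after the gasoline engine's poor conversion efficiency is accounted for, the mass ratio comes out near 100, which is why early EVs could not compete on range.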

Refueling the car with gasoline required only minutes, supplies of gasoline seemed to be limitless, and the long distance delivery of goods and passengers was relatively cheap and easy. This led to the virtual disappearance of electric cars by 1920.


A number of types of security challenges to which SCADA systems may be vulnerable are recognized in the industry. The list includes:

• Authorization violation: an authorized user performing functions beyond his level of authority

• Eavesdropping: gleaning unauthorized information by listening to unprotected communications

• Information leakage: authorized users sharing information with unauthorized parties

• Intercept/alter: an attacker inserting himself (either logically or physically) into a data connection and then intercepting and modifying messages for his own purposes

• Masquerade (“spoofing”): an intruder pretending to be an authorized entity and thereby gaining access to a system

• Replay: an intruder recording a legitimate message and replaying it back at an inopportune time.

An often-quoted example is recording the radio transmission used to activate public safety warning sirens during a test transmission and then replaying the message sometime later.

An attack of this type does not require more than very rudimentary understanding of the communication protocol.

• Denial of service attack: an intruder attacking a system by consuming a critical system resource such that legitimate users are never or infrequently serviced.
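A common countermeasure to the masquerade and replay attacks above is to authenticate each message with a keyed MAC and a monotonically increasing sequence number. The sketch below is illustrative only; it is not any specific SCADA security standard (such as DNP3 Secure Authentication):

```python
import hmac
import hashlib
import struct

def protect(key, seq, payload):
    """Prefix a sequence number and append an HMAC-SHA256 tag so a
    receiver can reject spoofed or replayed frames (illustrative only)."""
    msg = struct.pack(">Q", seq) + payload
    return msg + hmac.new(key, msg, hashlib.sha256).digest()

def verify(key, last_seq, frame):
    """Return (seq, payload) if the tag checks and seq advances; else None."""
    msg, tag = frame[:-32], frame[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()):
        return None  # forged or corrupted frame
    seq = struct.unpack(">Q", msg[:8])[0]
    if seq <= last_seq:
        return None  # replayed or out-of-order frame
    return seq, msg[8:]
```

A recorded siren-activation message, replayed later, fails the sequence-number check even though its MAC is valid, so the attack in the example above is defeated without the receiver needing to understand the payload.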


Investigations of threats to corporate computer hardware and software systems traditionally have shown that the greatest number of attacks come from internal sources. Substation control systems and IEDs are different in that information about them is less well known to the general public.

However, the hardware, software, architecture, and communication protocols for substations are well known to the utilities, equipment suppliers, contractors, and consultants throughout the industry. Often, the suppliers of hardware, software, and services to the utility industry share the same level of trust and access as the utility individuals themselves.

Consequently, the concept of an insider is even more encompassing. A utility employee knows how to access the utility’s computer systems to gather information or cause damage, and also has the necessary access rights (keys and passwords).

The utility must protect itself against disgruntled employees who seek to cause damage as well as employees who are motivated by the prospect of financial gain. Computer-based systems at substations have data of value to a utility’s competitors as well as data of value to the competitors of utility customers (e.g., the electric load of an industrial plant).

Corporate employees have been bribed in the past to provide interested parties with valuable information; we have to expect that this situation will also apply to utility employees with access to substation systems. Furthermore, we cannot rule out the possibility of an employee being bribed or blackmailed to cause physical damage, or to disclose secrets that will allow other parties to cause damage.

A second potential threat comes from employees of suppliers of substation equipment. These employees also have the knowledge that enables them to access or damage substation assets. And often they have access as well. One access path is from the diagnostic port of substation monitoring and control equipment.

It is often the case that the manufacturer of a substation device has the ability to establish a link with the device for the purpose of performing diagnostics via telephone and modem (either via the Internet or else by calling the device using the public switched telephone network).

An unscrupulous employee of the manufacturer could use this link to cause damage or gather confidential information. Additionally, an open link can be accessed by an unscrupulous hacker to obtain unauthorized access to a system. This has occurred frequently in other industries.

Another pathway for employees of the utility or of equipment suppliers to illicitly access computer-based substation equipment is via the communications paths into the substation.

A third threat is from the general public. The potential intruder might be a hacker who is simply browsing and probing for weak links or who possibly wants to demonstrate his prowess at penetrating corporate defenses.

Or the threat might originate from an individual who has some grievance against the utility or against society in general and is motivated to cause some damage. The utility should not underestimate the motivation of an individual outsider or the amount of time that someone might dedicate to investigating vulnerabilities in the utility’s defenses.

A fourth threat is posed by criminals who attempt to extort money (by threatening to do damage) or to gain access to confidential corporate records, such as those maintained in the customer database, for sale or use.

The fifth, and arguably the most serious, threat is from terrorists or hostile foreign powers. These antagonists have the resources to mount a serious attack. Moreover, they can be quite knowledgeable, since the computer-based systems that outfit a substation are sold worldwide with minimal export restrictions, and documentation and operational training are provided to the purchaser.

The danger from an organized hostile power is multiplied by the likelihood that an attack, if mounted, would occur in many places simultaneously and would presumably be coupled with other cyber, physical, or biological attacks aimed at crippling the response capabilities.


Measurements are normally carried out using the Wenner method and the data is used to arrive at a representative soil model for the site.

Whilst the measurements would best be carried out in representative weather conditions, this is clearly not always possible, so allowance for seasonal effects may need to be made in the model.

This would normally be done by modifying the resistivity and/or depth of the surface layer. Some typical soil resistivity values are shown in Table 8.2.

Measurements are taken for a range of probe separations, each of which is a general indicator of the depth to which the value applies. Measurements in a number of directions would be taken and averaged values (excluding obvious errors) for each separation distance would be used to derive the initial soil model.

A number of computer programmes are commercially available and used to translate the data into a representative soil model. It is useful to have both the average model and the data spread, so that the error band is known, as this will influence the subsequent calculations or suggest that the derived soil model be modified to improve its accuracy.

It is possible to use formulae or graphical methods to derive a two layer model. The formula below relates the resistivity, ρ1, of the upper layer of depth h1 to the resistivity, ρ2, of the lower layer:

The value ρs is the apparent resistivity measured at probe separation a (roughly indicative of depth a). IEEE 80 includes a number of graphs to achieve the same result, based on the work of Sunde.
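For reference, the widely used two-layer series (often attributed to Tagg, and the basis of the graphs in IEEE 80) can be evaluated numerically. Notation follows the text, with k the reflection coefficient between the two layers:

```python
import math

def wenner_apparent_resistivity(rho1, rho2, h, a, n_terms=200):
    """Apparent resistivity (ohm-m) seen by a Wenner array of probe
    separation a over a two-layer soil: upper layer of resistivity rho1
    and depth h over a lower layer of resistivity rho2 (Tagg's series)."""
    k = (rho2 - rho1) / (rho2 + rho1)  # reflection coefficient
    s = 0.0
    for n in range(1, n_terms + 1):
        kn = k ** n
        s += kn / math.sqrt(1 + (2 * n * h / a) ** 2)
        s -= kn / math.sqrt(4 + (2 * n * h / a) ** 2)
    return rho1 * (1 + 4 * s)
```

As expected, small separations return the upper-layer resistivity and very large separations tend towards the lower-layer value, which is why each probe separation is treated as a general indicator of depth.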

It is unusual to use formulae now, because the interactive computer programmes available can quickly provide a model which may have a number of vertical or horizontal interfaces. Often a three layer model is necessary to provide sufficient accuracy.

The soil model values are then used in formulae or a computer programme to calculate the earth resistance and hazard voltages.


Choosing the most appropriate method of cooling for a particular application is a common problem in transformer specification. No clear rules can be given, but the following guidance for mineral oil-immersed transformers may help. The basic questions to consider are as follows:

1. Is capital cost a prime consideration?
2. Are maintenance procedures satisfactory?
3. Will the transformer be used on its own or in parallel with other units?
4. Is physical size critical?

This type of cooling, natural circulation of both oil and air (ONAN), has no mechanical moving parts and therefore requires little, if any, maintenance. Many developing countries prefer this type because of its reliability, but there is an increasing cost penalty as sizes increase.

A transformer supplied with fans fitted to the radiators will have a rating, with fans in operation, of probably between 15% and 33% greater than with the fans not in operation. The transformer therefore has an effective dual rating under ONAN and ONAF conditions.

The transformer might be specified as 20/25 MVA ONAN/ONAF. The increased output under ONAF conditions is reliably and cheaply obtained.
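The dual rating is a simple multiplier; the 25% default below reproduces the 20/25 MVA example (the actual uplift for a given design comes from the manufacturer, within the 15-33% range quoted above):

```python
def onaf_rating_mva(onan_mva, fan_uplift=0.25):
    """ONAF rating implied by a fan uplift fraction; the uplift for a real
    unit is design-specific and comes from the manufacturer's data."""
    return onan_mva * (1 + fan_uplift)
```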

Applying an ONAN/ONAF transformer in a situation where the ONAF rating is required most of the time is undesirable since reliance is placed on fan operation. Where a ‘firm’ supply is derived from two transformers operating in parallel on a load-sharing basis the normal load is well inside the ONAN rating and the fans would only run in the rare event of one transformer being out of service.

Such an application would exploit the cost saving of the ONAF design without placing too much emphasis on the reliable operation of the fans. Note that fans create noise and additional noise mitigating precautions may be needed in environmentally sensitive areas.

Forcing the oil circulation and blowing air over the radiators (OFAF) will normally achieve a smaller, cheaper transformer than either ONAF or ONAN. Generally speaking, the larger the rating required the greater the benefits.

However, the maintenance burden is increased owing to the oil pumps, motors and radiator fans required. Application in attended sites, with good maintenance procedures, is generally satisfactory. Generator transformers and power station interbus transformers will often use OFAF cooling.

These are specialized cooling categories (ODAF or ODWF) where the oil is ‘directed’ by pumps into the closest proximity possible to the winding conductors. The external cooling medium can be air or water.

Because of the design, operation of the oil pumps, cooling fans, or water pumps is crucial to the rating obtainable and such transformers may have rather poor naturally cooled (ONAN) ratings. Such directed and forced cooling results in a compact and economical design suitable for use in well-maintained environments.


Three phase windings of transformers will normally be connected in a delta configuration, a star (wye) configuration, or, less commonly, in an interconnected star (zig-zag) configuration as shown in Fig. 14.16. The vector grouping and phase relationship nomenclature used is as follows:

• Capital letters for primary winding vector group designation.
• Small letters for secondary winding group designation.
• D or d represents a primary or secondary delta winding.
• Y or y represents a primary or secondary star winding.
• Z or z represents a primary or secondary interconnected star winding.
• N or n indicates primary or secondary winding with an earth connection to the star point.
• Numbers represent the phase relationship between the primary and secondary windings. 

The secondary to primary voltage displacement angles are given in accordance with the position of the ‘hands’ on a clock relative to the mid-day or twelve o’clock position. Thus 1 (representing one o’clock) is 30° lag, 3 is 90° lag, 11 is 30° lead (330° lag), and so on.

Therefore a Dy1 vector grouping indicates that the secondary red phase star voltage vector, Vrn, is at the one o’clock position and therefore lags the primary red phase delta voltage vector, at the twelve o’clock position, by 30°; i.e. the one o’clock position is 30° lagging the primary twelve o’clock position for conventional anti-clockwise vector rotation.

Similarly a Dyn11 vector grouping indicates that the secondary red phase voltage leads the primary voltage by 30°, i.e. the eleven o’clock position leads the twelve o’clock position by 30°. The secondary star point is earthed. Yy0 would indicate 0° phase displacement between the primary and secondary red phases on a star/star transformer.

Dz6 would indicate a delta primary interconnected star secondary and 180° secondary-to-primary voltage vector phase displacement. The system designer will usually have to decide which vector grouping arrangement is required for each voltage level in the network.
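The clock convention reduces to simple arithmetic; a small helper, using the lagging-angle convention described above:

```python
def displacement_lag_deg(clock_number):
    """Secondary-to-primary voltage displacement, expressed as a lagging
    angle, for an IEC vector-group clock number: each 'hour' is 30 degrees
    of lag, so 11 o'clock (330 deg lag) is equivalent to a 30 deg lead."""
    return (clock_number * 30) % 360
```

This reproduces the examples in the text: Dy1 gives 30° lag, Dyn11 gives 330° lag (a 30° lead), Yy0 gives 0°, and Dz6 gives 180°.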

There are many factors influencing the choice and good summaries of the factors of most interest to the manufacturer can be found in Ref. (1). From the user’s point of view, the following aspects will be important:

1. Vector displacement between the systems connected to each winding of the transformer and ability to achieve parallel operation.

2. Provision of a neutral earth point or points, where the neutral is referred to earth either directly or through an impedance. Transformers are used to give the neutral point in the majority of systems.

Clearly in Fig. 14.16 only the star or interconnected star (Z) winding configurations give a neutral location. If for various reasons, only delta windings are used at a particular voltage level on a particular system, a neutral point can still be provided by a purpose-made transformer called a ‘neutral earthing transformer’ or ‘earthing compensator transformer’ as shown in Fig. 14.16.

3. Practicality of transformer design and cost associated with insulation requirements. There may be some manufacturing difficulties with choosing certain winding configurations at certain voltage levels.

For example, the interconnected star configuration is bulky and expensive above about 33 kV. Of considerable significance in transmission systems is the cost and location of the tap changer switchgear.

4. The Z winding reduces voltage unbalance in systems where the load is not equally distributed between phases, and permits neutral current loading with inherently low zero-sequence impedance. It is therefore often used for earthing transformers.