Boundary scan is a method for testing interconnects (thin wire lines) on printed circuit boards or sub-blocks inside an integrated circuit.
The Joint Test Action Group (JTAG) developed a specification for boundary scan testing that was standardized in 1990 as the IEEE Std. 1149.1-1990. In 1994, a supplement that contains a description of the Boundary Scan Description Language (BSDL) was added which describes the boundary-scan logic content of IEEE Std 1149.1 compliant devices. Since then, this standard has been adopted by electronic device companies all over the world. Boundary scan is nowadays mostly synonymous with JTAG.
The boundary scan architecture provides a means to test interconnects and clusters of logic, memories, etc. without using physical test probes. It adds one or more so-called 'test cells' connected to each pin of the device that can selectively override the functionality of that pin. These cells can be programmed via the JTAG scan chain to drive a signal onto a pin and across an individual trace on the board. The cell at the destination of the board trace can then be programmed to read the value at the pin, verifying that the board trace properly connects the two pins. If the trace is shorted to another signal or if the trace has been cut, the correct signal value will not show up at the destination pin, and the board will be known to have a fault.
When performing boundary scan inside integrated circuits, cells are added between logical design blocks in order to be able to control them in the same manner as if they were physically independent circuits.
For normal operation, the added boundary scan latch cells are set so that they have no effect on the circuit, and are therefore effectively invisible. However, when the circuit is set into a test mode, the latches enable a data stream to be passed from one latch to the next. Once the complete data word has been passed into the circuit under test, it can be latched into place.
As the cells can be used to force data into the board, they can set up test conditions. The relevant states can then be fed back into the test system by clocking the data word back so that it can be analysed.
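The shift-in, latch, and shift-out sequence described above can be modeled in a few lines. The sketch below is a simplified software illustration only, not a real JTAG implementation; the ScanChain class and its methods are invented for the example:

```python
# Illustrative model of a boundary-scan chain (hypothetical, simplified):
# test bits are shifted in serially, latched onto the pins, and the
# values seen at the pins are shifted back out for analysis.
class ScanChain:
    def __init__(self, length):
        self.cells = [0] * length   # one scan cell per pin

    def shift(self, bits_in):
        """Shift a word in serially; the previous contents fall out."""
        bits_out = []
        for b in bits_in:
            bits_out.append(self.cells[-1])   # bit emerging from the far end
            self.cells = [b] + self.cells[:-1]
        return bits_out

    def capture(self, pin_values):
        """Parallel-load the values present at the pins into the cells."""
        self.cells = list(pin_values)

# Drive a test pattern onto the pins, then read it back:
chain = ScanChain(4)
chain.shift([1, 0, 1, 1])           # load the test pattern
board_trace = list(chain.cells)     # pattern latched onto the pins
chain.capture(board_trace)          # destination cells sample the pins
readback = chain.shift([0, 0, 0, 0])
```

Shifting the captured values back out returns the driven pattern, which the test system can compare against what it sent; a mismatch would indicate a broken or shorted trace.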
By adopting this technique, a test system can gain test access to a board. As most of today’s boards are very densely populated with components and tracks, it is very difficult for test systems to physically reach the relevant areas in order to test them. Boundary scan makes this possible.
Tradeoffs Abound in FPGA Design
Field-programmable gate arrays (FPGAs) arrived in 1984 as an alternative to programmable logic devices (PLDs) and ASICs. As their name implies, FPGAs offer the significant benefit of being readily programmable. Unlike their forebears in the PLD category, FPGAs can (in most cases) be programmed again and again, giving designers multiple opportunities to tweak their circuits. There’s no large non-recurring engineering (NRE) cost associated with FPGAs, and the lengthy, nerve-wracking waits for mask-making operations are eliminated. Often, with FPGA development, logic design begins to resemble software design due to the many iterations of a given design. Innovative design often happens with FPGAs as an implementation platform. But there are some downsides to FPGAs as well. The economics of FPGAs force designers to balance their relatively high piece-part pricing, compared to ASICs, against the absence of high NREs and long development cycles. They’re also available only in fixed sizes, which matters when you’re determined to avoid unused silicon area.
What are FPGAs?
FPGAs fill a gap between discrete logic and the smaller PLDs on the low end of the complexity scale and costly custom ASICs on the high end. They consist of an array of logic blocks that are configured using software. Programmable I/O blocks surround these logic blocks. Both are connected by programmable interconnects (Fig. 1). The programming technology in an FPGA determines the type of basic logic cell and the interconnect scheme. In turn, the logic cells and interconnection scheme determine the design of the input and output circuits as well as the programming scheme.
Just a few years ago, the largest FPGA was measured in tens of thousands of system gates and operated at 40 MHz, and the most advanced parts at the time often cost more than $150. Today, however, FPGAs offer millions of gates of logic capacity, operate at 300 MHz, can cost less than $10, and offer integrated functions like processors and memory (Table 1).
FPGAs offer all of the features needed to implement most complex designs. Clock management is facilitated by on-chip PLL (phase-locked loop) or DLL (delay-locked loop) circuitry. Dedicated memory blocks can be configured as basic single-port RAMs, ROMs, FIFOs, or CAMs. Data processing, as embodied in the devices’ logic fabric, varies widely. The ability to link the FPGA with backplanes, high-speed buses, and memories is afforded by support for various single-ended and differential I/O standards. Also found on today’s FPGAs are system-building resources such as high-speed serial I/Os, arithmetic modules, embedded processors, and large amounts of memory.
Initially seen as a vehicle for rapid prototyping and emulation systems, FPGAs have spread into a host of applications. They were once too simple, and too costly, for anything but small-volume production. Now, with the advent of much larger devices and declining per-part costs, FPGAs are finding their way off the prototyping bench and into production (Table 2).
Do’s And Don’ts For The FPGA Designer
- Do concentrate on I/O timing, not just the register-to-register internal frequency that the FPGA place-and-route tools report. Frequently, the hardest challenge in a complete FPGA design is the I/O timing. Focus on how your signals enter and leave your FPGA, because that’s where the bottlenecks frequently occur.
- Do create hierarchy around vendor-specific structures and instantiations. Give yourself the freedom to migrate from one technology to another by ensuring that each instantiation of a vendor-specific element is in a separate hierarchical block. This applies especially to RAMs and clock-management blocks.
- Do use IP timing models during synthesis to give a true picture of your design. By importing EDIF netlists of pre-synthesized blocks, your synthesis tool can fully understand your timing requirements. Be cautious with vendor cores that can be brought into your synthesis tool without a timing model.
- Do design your hierarchical blocks with registered outputs where possible to avoid having critical paths pass through many levels of hierarchy. FPGAs exhibit step-functions in logic-limited performance. When hierarchy is preserved and the critical path passes across a hierarchical boundary, you may introduce an extra level of logic. When considered along with the associated routing, this can add significant delay to your critical path.
- Do enable retiming in your synthesis tool. FPGAs tend to be register-rich architectures. When you correctly constrain your design in synthesis, you allow the tool to optimize your design to take advantage of positive slack within the design. Sometimes retiming can be applied again after initial place and route, replacing wireload estimates with real routing delays.
- Don’t synthesize unless you’ve fully and correctly constrained your design. This includes correct clock domains, I/O timing requirements, multicycle paths, and false paths. If your synthesis tool doesn’t see exactly what you want, it can’t make decisions to optimize your design accordingly.
- Don’t try to fix every timing problem in place and route. Place and route offers little room for fixing timing problems that a properly constrained synthesis run would have avoided.
- Don’t vainly floor plan at the RTL or block level hoping to improve place-and-route results. Manual area placement can cause more problems than it might initially appear to solve. Unless you are an expert in manual placement and floorplanning, this is best left alone.
- Don’t string clock buffers together, create multiple clock trees from the same clock, or use multiple clocks when a simple enable will do. Clocking schemes in FPGAs can become very complicated now that there are PLLs, DLLs, and large numbers of clock-distribution networks. Poor clocking schemes can lead to extended place-and-route times, failure to meet timing, and even failure to place in some technologies. Simpler schemes are vastly more desirable. Avoid those gated clocks, too!
- Don’t forget to simulate your design blocks as well as your entire design. Discovering an error at the chip’s pins during on-board testing and tracing it back to its source can be extremely difficult. On-board FPGA testing can miss important design flaws that are much easier to identify during simulation, where they can be rectified by modifying the FPGA’s programming.
Comparing FPGA Architectures
FPGAs must be programmed by users to connect the chip’s resources in the appropriate manner to implement the desired functionality. Over the years, various technologies have emerged to suit different requirements. Some FPGAs can only be programmed once. These devices employ antifuse technology. Flash-based devices can be programmed and reprogrammed again after debugging. Still others can be dynamically programmed thanks to SRAM-based technology. Each has its advantages and disadvantages (Table 3).
Most modern FPGAs are based on SRAM configuration cells, which offer the benefit of unlimited reprogrammability. When powered up, they can be configured to perform a given task, such as a board or system test, and then reprogrammed to perform their main task. On the flip side, though, SRAM-based FPGAs must be reconfigured each time their host system is powered up, and additional external circuitry is required to do so. Further, because the configuration file used to program the FPGA is stored in external memory, security issues concerning intellectual property emerge.
Antifuse-based FPGAs aren’t in-system programmable, but rather are programmed offline using a device programmer. Once the chip is configured, it can’t be altered.
However, in antifuse technology, device configuration is nonvolatile with no need for external memory. On top of that, it’s virtually impossible to reverse-engineer their programming. They often work as replacements for ASICs in small volumes. In a sense, flash-based FPGAs fulfill the promise of FPGAs in that they can be reprogrammed many times. They’re nonvolatile, retaining their configuration even when powered down. Programming is done either in-system or with a programmer. In some cases, IP security can be achieved using a multibit key that locks the configuration data after programming.
But flash-based FPGAs require extra process steps above and beyond standard CMOS technology, leaving them at least a generation behind. Moreover, the many pull-up resistors result in high static power consumption.
FPGAs can also be characterized as having either fine-, medium-, or coarse-grained architectures. Fine-grained architectures boast a large number of relatively simple logic blocks. Each logic block usually contains either a two-input logic function or a 4-to-1 multiplexer and a flip-flop. Blocks can only be used to implement simple functions. But fine-grained architectures lend themselves to execution of functions that benefit from parallelism.
Coarse-grained architectures consist of relatively large logic blocks often containing two or more lookup tables and two or more flip-flops. In most of these architectures, a four-input lookup table (think of it as a 16 x 1 ROM) implements the actual logic.
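The "16 x 1 ROM" view of a four-input lookup table is easy to demonstrate in software. In this hypothetical Python model (the function name and encoding are ours, for illustration), the truth table is a 16-bit integer and the four inputs form the address that selects one of its bits:

```python
def lut4(truth_table: int, a: int, b: int, c: int, d: int) -> int:
    """Model a 4-input LUT as a 16 x 1 ROM: the four inputs form a
    4-bit address that selects one bit of the 16-bit truth table."""
    index = (d << 3) | (c << 2) | (b << 1) | a
    return (truth_table >> index) & 1

# "Programming" the LUT to implement a 4-input AND: only address 15
# (all inputs high) reads back 1.
AND4 = 1 << 15
assert lut4(AND4, 1, 1, 1, 1) == 1
assert lut4(AND4, 1, 1, 1, 0) == 0
```

Any 4-input Boolean function is just a different 16-bit constant loaded into the same hardware, which is what makes the LUT such a flexible building block.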
The FPGA Design Flow

After weighing all implementation options, you must consider the design flow. The process of implementing a design on an FPGA can be broken down into several stages, loosely definable as design entry or capture, synthesis, and place and route (Fig. 2). Along the way, the design is simulated at various levels of abstraction, as in ASIC design. The availability of sophisticated and coherent tool suites for FPGA design makes them all the more attractive.
At one time, design entry was performed in the form of schematic capture. Most designers have moved over to hardware description languages (HDLs) for design entry. Some will prefer a mixture of the two techniques. Schematic-based design-capture tools gave designers a great deal of control over the physical placement and partitioning of logic on the device. But it’s becoming less likely that designers will take that route. Meanwhile, language-based design entry is faster, but often at the expense of performance or density.
For many designers, the choice of whether to use schematic- or HDL-based design entry comes down to their conception of their design. For those who think in software or algorithmic-like terms, HDLs are the better choice. HDLs are well suited for highly complex designs, especially when the designer has a good handle on how the logic must be structured. They can also be very useful for designing smaller functions when you haven’t the time or inclination to work through the actual hardware implementation.
On the other hand, HDLs represent a level of abstraction that can isolate designers from the details of the hardware implementation. Schematic-based entry gives designers much more visibility into the hardware. It’s a better method for those who are hardware-oriented. The downside of schematic-based entry is that it makes the design more difficult to modify or port to another FPGA.
A third option for design entry, state-machine entry, works well for designers who can see their logic design as a series of states that the system steps through. It shines when designing somewhat simple functions, often in the area of system control, that can be clearly represented in visual formats. Tool support for finite state-machine entry is limited, though.
Some designers approach the start of their design from a level of abstraction higher than HDLs, which is algorithmic design using the C/C++ programming languages. A number of EDA vendors have tool flows supporting this design style. Generally, algorithmic design has been thought of as a tool for architectural exploration. But increasingly, as tool flows emerge for C-level synthesis, it’s being accepted as a first step on the road to hardware implementation.
After design entry, the design is simulated at the register-transfer level (RTL). This is the first of several simulation stages, because the design must be simulated at successive levels of abstraction as it moves down the chain toward physical implementation on the FPGA itself. RTL simulation offers the highest performance in terms of speed. As a result, designers can perform many simulation runs in an effort to refine the logic. At this stage, FPGA development isn’t unlike software development. Signals and variables are observed, procedures and functions traced, and breakpoints set. The good news is that it’s a very fast simulation. But because the design hasn’t yet been synthesized to gate level, properties such as timing and resource usage are still unknowns.
The next step following RTL simulation is to convert the RTL representation of the design into a bit-stream file that can be loaded onto the FPGA. This process can be broken down into three steps. First, FPGA synthesis translates the VHDL or Verilog code into a device netlist format, a description of the required logic elements and their interconnections. Then the implementation tools convert that netlist into a bit-stream file, or .bit file, in a form the device can load. Lastly, the .bit file is downloaded to the physical FPGA, programming the design onto the part.
It’s important to fully constrain designs before synthesis (Fig. 3). A constraint file is an input to the synthesis process just as the RTL code is. Constraints can be applied globally or to specific portions of the design. The synthesis engine uses these constraints to optimize the netlist. However, it’s equally important not to over-constrain the design, which will generally produce less-than-optimal results from the next step in the implementation process: physical device placement and interconnect routing. Synthesis constraints soon become place-and-route constraints.
This traditional flow will work, but it can lead to numerous iterations before achieving timing closure. Some EDA vendors have incorporated more modern physical synthesis techniques, which automate device re-timing by moving lookup tables (LUTs) across registers to balance out timing slack. Physical synthesis also anticipates place and route to leverage delay information.
Following synthesis, device implementation begins. The design is automatically converted into the format supported internally by the FPGA vendor’s place-and-route tools. Design-rule checking and optimization are performed on the incoming netlist, and the software partitions the design onto the available logic resources. Good partitioning is required to achieve high routing completion and high performance.
Increasingly, FPGA designers are turning to floorplanning after synthesis and design partitioning. FPGA floorplanners work from the netlist hierarchy as defined by the RTL coding. Floorplanning can help if area is tight. When possible, it’s a good idea to place critical logic in separate blocks.
After partitioning and floorplanning, the placement tool tries to place the logic blocks to achieve efficient routing. The tool monitors routing length and track congestion while placing the blocks. It may also track the absolute path delays to meet the user’s timing constraints. Overall, the process mimics PCB place and route.
Functional simulation is performed after synthesis and before physical implementation. This step ensures correct logic functionality. After implementation, there’s a final verification step with full timing information. After placement and routing, the logic and routing delays are back-annotated to the gate-level netlist for this final simulation. At this point, simulation is a much longer process, because timing is also a factor (Fig. 4). Often, designers substitute static timing analysis for timing simulation. Static timing analysis calculates the timing of combinational paths between registers and compares it against the designer’s timing constraints.
Once the design is successfully verified and found to meet timing, the final step is to actually program the FPGA itself. At the completion of placement and routing, a binary programming file is created. It’s used to configure the device. No matter what the device’s underlying technology, the FPGA interconnect fabric has cells that configure it to connect to the inputs and outputs of the logic blocks; other cells configure the logic blocks themselves. Most programmable-logic technologies, including the PROMs for SRAM-based FPGAs, require some sort of a device programmer. Devices can also be programmed through their configuration ports using a set of dedicated pins.
Modern FPGAs also incorporate a JTAG port that, happily, can be used for more than boundary-scan testing. The JTAG port can be connected to the device’s internal SRAM configuration-cell shift register, which in turn can be instructed to connect to the chip’s JTAG scan chain.
If you’ve gotten this far with your design, chances are you have a finished FPGA. There’s one more step to the process, however, which is to attach the device to a printed-circuit board in a system. The appearance of 10-Gbit/s serial transmitters, or I/Os, on the chip, coupled with packages containing as many as 1500 pins, makes the interface between the FPGA and its intended system board a very sticky issue. All too often, an FPGA is soldered to a pc board and it doesn’t function as expected or, worse, it doesn’t function at all. That can be the result of errors caused by manual placement of all those pins, not to mention the board-level timing issues created by a complex FPGA.
More than ever, designers must strongly consider an integrated flow that takes them from conception of the FPGA through board design. Such flows maintain complete connectivity between the system-level design and the FPGA; they also do so between design iterations. Not only do today’s integrated FPGA-to-board flows create the schematic connectivity needed for verification and layout of the board, but they also document which signal connections are made to which device pins and how these map to the original board-level bus structures.
Integrated flows for FPGAs make sense in general, considering that FPGA vendors will continue to introduce more complex, powerful, and economical devices over time. An integrated third-party flow makes it easier to re-target a design to different technologies from different vendors as conditions warrant.
David Maliniak, Electronic Design Automation Editor
Sponsored by Mentor Graphics Corp.
The Intel HEX file is an ASCII text file with lines of text that follow the Intel HEX file format. Each line in an Intel HEX file contains one HEX record. These records are made up of hexadecimal numbers that represent machine language code and/or constant data. Intel HEX files are often used to transfer the program and data that would be stored in a ROM or EPROM. Most EPROM programmers or emulators can use Intel HEX files.
An Intel HEX file is composed of any number of HEX records. Each record is made up of five fields that are arranged in the following format:

:llaaaatt[dd...]cc

Each group of letters corresponds to a different field, and each letter represents a single hexadecimal digit. Each field is composed of at least two hexadecimal digits (which make up a byte) as described below:
- : is the colon that starts every Intel HEX record.
- ll is the record-length field that represents the number of data bytes (dd) in the record.
- aaaa is the address field that represents the starting address for subsequent data in the record.
- tt is the field that represents the HEX record type, which may be one of the following:
- 00 - data record
- 01 - end-of-file record
- 02 - extended segment address record
- 04 - extended linear address record
- dd is a data field that represents one byte of data. A record may have multiple data bytes. The number of data bytes in the record must match the number specified by the ll field.
- cc is the checksum field that represents the checksum of the record. The checksum is calculated by summing the values of all preceding byte pairs in the record (everything after the colon, excluding the checksum itself) modulo 256 and taking the two's complement.
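The checksum rule can be expressed directly in Python. This is an illustrative helper (the function name is ours), assuming the record is given as the usual colon-prefixed hex string:

```python
def hex_record_checksum(record: str) -> int:
    """Compute the checksum byte for an Intel HEX record string.
    All bytes after the colon, except the final checksum byte,
    are summed modulo 256 and two's-complemented."""
    data = bytes.fromhex(record.lstrip(':'))
    return (-sum(data[:-1])) & 0xFF
```

For the end-of-file record ':00000001FF', for instance, the byte sum is 01h, so the checksum works out to FFh.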
The Intel HEX file is made up of any number of data records that are terminated with a carriage return and a linefeed. A data record appears as follows:

:10246200464C5549442050524F46494C4500464C33
This record is decoded as follows:
- 10 is the number of data bytes in the record.
- 2462 is the address where the data are to be located in memory.
- 00 is the record type 00 (a data record).
- 464C...464C is the data.
- 33 is the checksum of the record.
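A record of this shape can be pulled apart field by field. The following Python sketch (a hypothetical helper, not part of any standard library) parses one record and verifies it using the property that all bytes of a valid record, including the checksum, sum to zero modulo 256:

```python
def parse_hex_record(line: str):
    """Split one Intel HEX record into its five fields."""
    if not line.startswith(':'):
        raise ValueError("records must start with a colon")
    raw = bytes.fromhex(line[1:])
    length = raw[0]                    # ll field
    address = (raw[1] << 8) | raw[2]   # aaaa field
    rtype = raw[3]                     # tt field
    data = raw[4:4 + length]           # dd bytes
    checksum = raw[-1]                 # cc field
    if len(data) != length or sum(raw) & 0xFF != 0:
        raise ValueError("malformed or corrupt record")
    return length, address, rtype, data, checksum
```

Feeding it an extended linear address record such as ':02000004FFFFFC' returns length 2, address 0000h, type 04h, the two data bytes, and the checksum.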
Extended Linear Address Records (HEX386)
Extended linear address records are also known as 32-bit address records and HEX386 records. These records contain the upper 16 bits (bits 16-31) of the data address. The extended linear address record always has two data bytes and appears as follows:

:02000004FFFFFC
- 02 is the number of data bytes in the record.
- 0000 is the address field. For the extended linear address record, this field is always 0000.
- 04 is the record type 04 (an extended linear address record).
- FFFF is the upper 16 bits of the address.
- FC is the checksum of the record and is calculated as 01h + NOT(02h + 00h + 00h + 04h + FFh + FFh).
When an extended linear address record is read, the extended linear address stored in the data field is saved and is applied to subsequent records read from the Intel HEX file. The linear address remains effective until changed by another extended address record.
The absolute-memory address of a data record is obtained by adding the address field in the record to the shifted address data from the extended linear address record. The following example illustrates this process.
Address from the data record's address field 2462
Extended linear address record data field FFFF
Absolute-memory address FFFF2462
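This addition amounts to shifting the extended linear address left 16 bits before adding the record's own address field. A quick check in Python (the function name is ours, for illustration):

```python
def linear_absolute(upper16: int, offset: int) -> int:
    # Type-04 record data supplies bits 16-31 of the address;
    # the data record's address field supplies bits 0-15.
    return (upper16 << 16) + offset

print(f"{linear_absolute(0xFFFF, 0x2462):08X}")  # FFFF2462
```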
Extended Segment Address Records (HEX86)
Extended segment address records, also known as HEX86 records, contain bits 4-19 of the data address segment. The extended segment address record always has two data bytes and appears as follows:

:020000021200EA
- 02 is the number of data bytes in the record.
- 0000 is the address field. For the extended segment address record, this field is always 0000.
- 02 is the record type 02 (an extended segment address record).
- 1200 is the segment of the address.
- EA is the checksum of the record and is calculated as 01h + NOT(02h + 00h + 00h + 02h + 12h + 00h).
When an extended segment address record is read, the extended segment address stored in the data field is saved and is applied to subsequent records read from the Intel HEX file. The segment address remains effective until changed by another extended address record.
The absolute-memory address of a data record is obtained by adding the address field in the record to the shifted-address data from the extended segment address record. The following example illustrates this process.
Address from the data record's address field 2462
Extended segment address record data field 1200
Absolute memory address 00014462
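Here the 16-bit value from the type-02 record is shifted left only 4 bits before the addition. Sketched in Python (again, the function name is ours):

```python
def segment_absolute(segment: int, offset: int) -> int:
    # Type-02 record data supplies bits 4-19 of the base address.
    return (segment << 4) + offset

print(f"{segment_absolute(0x1200, 0x2462):08X}")  # 00014462
```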
End-of-File (EOF) Records
An Intel HEX file must end with an end-of-file (EOF) record. This record must have the value 01 in the record type field. An EOF record always appears as follows:

:00000001FF
- 00 is the number of data bytes in the record.
- 0000 is the address where the data are to be located in memory. The address in end-of-file records is meaningless and is ignored. An address of 0000h is typical.
- 01 is the record type 01 (an end-of-file record).
- FF is the checksum of the record and is calculated as 01h + NOT(00h + 00h + 00h + 01h).
Example Intel HEX File
Following is an example of a complete Intel HEX file:
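A small but complete file can also be built programmatically from the record rules above. The helper below is our own illustrative sketch (hypothetical contents, not a standard utility); it emits one extended linear address record, one data record, and the mandatory EOF record:

```python
def make_record(address: int, rtype: int, data: bytes) -> str:
    """Assemble one Intel HEX record, computing its checksum."""
    body = bytes([len(data), address >> 8, address & 0xFF, rtype]) + data
    checksum = (-sum(body)) & 0xFF
    return ':' + body.hex().upper() + f'{checksum:02X}'

lines = [
    make_record(0x0000, 0x04, bytes([0x00, 0x01])),  # upper 16 bits = 0001h
    make_record(0x0100, 0x00, bytes(range(8))),      # 8 data bytes at 00010100h
    make_record(0x0000, 0x01, b''),                  # end-of-file record
]
print('\n'.join(lines))
```

This prints the three records :020000040001F9, :080100000001020304050607DB, and :00000001FF, which together form a valid Intel HEX file.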
In 1972, Intel launched the 8008, the first 8-bit microprocessor. It implemented an instruction set designed by Datapoint Corporation with programmable CRT terminals in mind, one that also proved to be fairly general-purpose. The device needed several additional ICs to produce a functional computer, in part due to its small 18-pin "memory package," which prevented the use of a separate address bus (Intel was primarily a DRAM manufacturer at the time).
Two years later, in 1974, Intel launched the 8080, employing the new 40-pin DIL packages originally developed for calculator ICs to enable a separate address bus. It had an extended instruction set that was source- (not binary-) compatible with the 8008 and also included some 16-bit instructions to make programming easier. The 8080, often described as the first truly useful microprocessor, was nonetheless soon replaced by the 8085, which could run from a single 5-V power supply instead of the three voltages of earlier chips. Other well-known 8-bit microprocessors that emerged during these years were the Motorola 6800 (1974), Microchip PIC16X (1975), MOS Technology 6502 (1975), Zilog Z80 (1976), and Motorola 6809 (1977).
The first x86 design
The 8086 was originally intended as a temporary substitute for the ambitious iAPX 432 project, in an attempt to draw attention from the less-delayed 16- and 32-bit processors of other manufacturers (such as Motorola, Zilog, and National Semiconductor) and at the same time to counter the successful Z80 (designed by former Intel employees). Both the architecture and the physical chip were therefore developed quickly (in a little more than two years), using the same basic microarchitecture elements and physical implementation techniques as the 8085, released a year earlier, of which the 8086 would also function as a continuation. Marketed as source compatible, it was designed so that assembly language for the 8085, 8080, or 8008 could be automatically converted into equivalent (sub-optimal) 8086 source code, with little or no hand-editing. This was possible because the programming model and instruction set were (loosely) based on the 8085. However, the 8086 design was expanded to support full 16-bit processing, instead of the fairly basic 16-bit capabilities of the 8080/8085. New kinds of instructions were added as well: self-repeating operations and instructions to better support nested ALGOL-family languages such as Pascal, among others.
The 8086 was sequenced using a mix of random logic and microcode and was implemented using depletion load nMOS circuitry with approximately 20,000 active transistors (29,000 counting all ROM and PLA sites). It was soon moved to a new refined nMOS manufacturing process called HMOS (for High performance MOS) that Intel originally developed for manufacturing of fast static RAM products. This was followed by HMOS-II, HMOS-III, and eventually a CMOS version. The original chip measured 33 mm² and minimum feature size was 3.2 μm.
The architecture was defined by Stephen P. Morse and Bruce Ravenel. Peter A. Stoll was lead engineer of the development team and William Pohlman the manager. While less known than the 8088 chip, the legacy of the 8086 is enduring; references to it can still be found on most modern computers in the form of the Vendor ID entry for all Intel devices, which is 8086H (hexadecimal). It also lent its last two digits to Intel's later extended versions of the design, such as the 286 and the 386, all of which eventually became known as the x86 family.
Buses and operation
All internal registers as well as internal and external data buses were 16 bits wide, firmly establishing the "16-bit microprocessor" identity of the 8086. A 20-bit external address bus gave a 1 MB (segmented) physical address space (2^20 = 1,048,576). The data bus was multiplexed with the address bus in order to fit a standard 40-pin dual in-line package. 16-bit I/O addresses meant 64 KB of separate I/O space (2^16 = 65,536). The maximum linear address space was limited to 64 KB, simply because internal registers were only 16 bits wide. Programming over 64 KB boundaries involved adjusting segment registers (see below) and was therefore fairly awkward (and remained so until the 80386).
Some of the control pins, which carry essential signals for all external operations, had more than one function depending upon whether the device was operated in "min" or "max" mode. The former were intended for small single processor systems whilst the latter were for medium or large systems, using more than one processor.
Registers and instructions
The 8086 had eight (more or less general) 16-bit registers including the stack pointer, but excluding the instruction pointer, flag register and segment registers. Four of them (AX, BX, CX, DX) could also be accessed as (twice as many) 8-bit registers (AH, AL, BH, BL, etc.); the other four (BP, SI, DI, SP) were 16-bit only.
Due to a compact encoding inspired by the 8085 and other 8-bit processors, most instructions were one-address or two-address operations, which means that the result was stored in one of the operands. At most one of the operands could be in memory, but this memory operand could also be the destination, while the other operand, the source, could be either a register or an immediate value. A single memory location could also often be used as both source and destination, which, among other factors, further contributed to a code density comparable to (and often better than) most 8-bit machines.
Although the degree of generality of most registers was much greater than in the 8080 or 8085, it was still fairly low compared to the typical contemporary minicomputer, and registers were also sometimes used implicitly by instructions. While perfectly sensible for the assembly programmer, this complicated register allocation for compilers compared to more regular 16- and 32-bit processors (such as the PDP-11, VAX, 68000, etc.); on the other hand, compared to contemporary 8-bit microprocessors (such as the 8085 or 6502), it was significantly easier to generate code for the 8086 design.
As mentioned above, the 8086 also featured 64 KB of 8-bit (or alternatively 32 K-words of 16-bit) I/O space. A 64 KB (one segment) stack growing towards lower addresses is supported in hardware; 2-byte words are pushed to the stack, and the top of the stack is pointed to by SS:SP. There are 256 interrupts, which can be invoked by both hardware and software. The interrupts can cascade, using the stack to store the return addresses.
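The downward-growing stack described above can be sketched in Python; the memory array, the function names, and the example SS:SP values are all illustrative assumptions, not 8086 software interfaces:

```python
memory = bytearray(1 << 20)  # 1 MB of simulated real-mode memory

def push(ss: int, sp: int, word: int) -> int:
    """Decrement SP by 2, store the 2-byte word at SS:SP, return the new SP."""
    sp = (sp - 2) & 0xFFFF                      # stack grows downward
    addr = ((ss << 4) + sp) & 0xFFFFF           # 20-bit physical address
    memory[addr:addr + 2] = word.to_bytes(2, "little")
    return sp

def pop(ss: int, sp: int):
    """Read the word at SS:SP, increment SP by 2, return (word, new SP)."""
    addr = ((ss << 4) + sp) & 0xFFFFF
    word = int.from_bytes(memory[addr:addr + 2], "little")
    return word, (sp + 2) & 0xFFFF

sp = push(0x2000, 0x0100, 0xBEEF)   # SP drops from 0x0100 to 0x00FE
word, sp = pop(0x2000, sp)          # SP returns to 0x0100
assert (word, sp) == (0xBEEF, 0x0100)
```

Note that PUSH leaves SP exactly 2 lower and POP leaves it exactly 2 higher, as the stack holds 2-byte words.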
The processor had some new instructions (not present in the 8085) to better support stack-based high-level programming languages such as Pascal and PL/M; some of the more useful ones were push mem-op and ret size, supporting the "Pascal calling convention". (Several others, such as push immed and enter, would be added in the subsequent 80186, 80286, and 80386 designs.)
There were also four sixteen-bit segment registers (CS, DS, SS, ES, standing for "code segment", "data segment", "stack segment" and "extra segment") that allowed the CPU to access one megabyte of memory in an unusual way. Rather than concatenating the segment register with the address register, as in most processors whose address space exceeded their register size, the 8086 shifted the segment register left 4 bits and added it to the offset address (physical address = 16·segment + offset), producing a 20-bit effective address from the 32-bit segment:offset pair. As a result, each physical address could be referred to by 2^12 = 4096 different segment:offset pairs. This scheme had the advantage that a small program (less than 64 kilobytes) could be loaded starting at a fixed offset (such as 0) in its own segment, avoiding the need for relocation, with at most 15 bytes of alignment waste. The 16-byte separation between segment bases was known as a "paragraph".
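The address calculation can be sketched directly; physical_address is a hypothetical helper name, and the arithmetic follows the formula above (physical address = 16·segment + offset, truncated to 20 bits):

```python
def physical_address(segment: int, offset: int) -> int:
    """Combine a 16-bit segment and a 16-bit offset into a 20-bit address,
    as on the 8086: shift the segment left 4 bits and add the offset."""
    return ((segment << 4) + offset) & 0xFFFFF  # wrap to 20 bits

# Many different segment:offset pairs name the same physical address:
assert physical_address(0x1234, 0x0005) == 0x12345
assert physical_address(0x1230, 0x0045) == 0x12345
assert physical_address(0x1000, 0x2345) == 0x12345

# Counting the aliases of one mid-range address (ignoring 20-bit wraparound)
# reproduces the 2^12 = 4096 figure from the text:
aliases = sum(1 for seg in range(0x10000)
              if 0 <= 0x12345 - (seg << 4) <= 0xFFFF)
assert aliases == 4096
```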
Compilers for the 8086 commonly supported two types of pointer, "near" and "far". Near pointers were 16-bit addresses implicitly associated with the program's code or data segment (and so made sense only in programs small enough to fit in one segment). Far pointers were 32-bit segment:offset pairs. C compilers also supported "huge" pointers, which were like far pointers except that pointer arithmetic on a huge pointer treated it as a flat 20-bit pointer, while pointer arithmetic on a far pointer wrapped around within its initial 64-kilobyte segment.
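The difference between far and huge pointer arithmetic can be simulated as follows; far_add and huge_add are illustrative names (not actual compiler intrinsics), and the huge normalization shown (keeping the offset below 16) is one common convention:

```python
def far_add(seg: int, off: int, n: int):
    """Far arithmetic: the offset wraps within the initial 64 KB segment."""
    return seg, (off + n) & 0xFFFF

def huge_add(seg: int, off: int, n: int):
    """Huge arithmetic: treat the pair as a flat 20-bit address, then
    renormalize so the offset stays below 16 (one paragraph)."""
    linear = ((seg << 4) + off + n) & 0xFFFFF
    return linear >> 4, linear & 0xF

# Crossing a 64 KB boundary shows the difference:
assert far_add(0x1000, 0xFFFF, 2) == (0x1000, 0x0001)   # wrapped around
assert huge_add(0x1000, 0xFFFF, 2) == (0x2000, 0x0001)  # kept advancing
```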
To avoid the need to specify "near" and "far" on every pointer and every function which took or returned a pointer, compilers also supported "memory models" which specified default pointer sizes. The "small", "compact", "medium", and "large" models covered every combination of near and far pointers for code and data. The "tiny" model was like "small" except that code and data shared one segment. The "huge" model was like "large" except that all pointers were huge instead of far by default. Precompiled libraries often came in several versions compiled for different memory models.
In principle the address space of the x86 series could have been extended in later processors by increasing the shift value, as long as applications obtained their segments from the operating system and did not make assumptions about the equivalence of different segment:offset pairs. In practice the use of "huge" pointers and similar mechanisms was widespread, and the shift value was never changed.
According to Morse et al., the designers of the 8086 considered using a shift of eight bits instead of four, which would have given the processor a 16-megabyte address space.
1. What are the flags in the 8086?
- The 8086 has Carry, Parity, Auxiliary Carry, Zero, Overflow, Trace, Interrupt, Direction, and Sign flags.
2. What are the various interrupts in the 8086?
- Maskable interrupts and Non-Maskable interrupts.
3. What is meant by a Maskable interrupt?
- An interrupt that can be turned off by the programmer is known as a Maskable interrupt.
4. What is a Non-Maskable interrupt?
- An interrupt that can never be turned off (i.e. disabled) is known as a Non-Maskable interrupt.
5. Which interrupts are generally used for critical events?
- Non-Maskable interrupts are used for critical events such as power failure and emergency shut-off.
6. Give examples of Maskable interrupts.
- RST 7.5, RST 6.5, and RST 5.5 (on the 8085) are Maskable interrupts.
7. Give an example of a Non-Maskable interrupt.
- TRAP (on the 8085) is a Non-Maskable interrupt, used in emergency conditions.
8. What is the maximum clock frequency of the 8086?
- 5 MHz is the maximum clock frequency of the original 8086.
9. What are the various segment registers in the 8086?
- The Code, Data, Stack, and Extra segment registers.
10. Which type of stack is used in the 8086?
- A LIFO (Last In First Out) stack is used in the 8086. In this type of stack, the most recently stored information is retrieved first.
11. What are the address lines for the software interrupts? -
12. What are the SIM and RIM instructions?
- SIM is Set Interrupt Mask, used to mask the hardware interrupts; RIM is Read Interrupt Mask, used to check whether an interrupt is masked or not (both on the 8085).
13. Which tool is used to connect the user and the computer?
- The interpreter is the tool used to connect the user and the computer.
14. What is the position of the stack pointer after the PUSH instruction?
- The stack pointer is 2 less than its earlier value.
15. What is the position of the stack pointer after the POP instruction?
- The stack pointer is 2 greater than its earlier value.
16. Logic calculations are done in which type of register?
- The accumulator is the register in which arithmetic and logic calculations are done.
17. What are the different functional units in the 8086?
- The Bus Interface Unit and the Execution Unit are the two functional units in the 8086.
18. Give examples of microcontrollers.
- The Intel MCS-51 and MCS-96 families and Motorola's 68HC11 are examples of microcontrollers (the Zilog Z80, by contrast, is a microprocessor).
19. What is meant by a cross-compiler?
- A compiler that runs on one machine but produces code that executes on another is called a cross-compiler.
20. What are the address lines for the hardware interrupts? -
21. Which segment is used to store interrupt and subroutine return addresses?
- The Stack Segment is used to store interrupt and subroutine return addresses.
22. Which flags can be set or reset by the programmer, and are also used to control the operation of the processor?
- The Trace, Interrupt, and Direction flags.
23. What does the EU do?
- The Execution Unit receives program instruction codes and data from the BIU, executes the instructions, and stores the results in the general registers.
24. Which microprocessor accepts programs written for the 8086 without any changes?
- The 8088 is that processor.
25. What is the difference between the 8086 and the 8088?
- The BIU in the 8088 has an 8-bit data bus, versus a 16-bit data bus in the 8086. The instruction queue is 4 bytes long in the 8088 and 6 bytes in the 8086.
These projects were produced in the last month of ECE 4760 each spring. The students were given the responsibility of choosing their project, then designing and building it. The microcontrollers used are from the Atmel ATmega series.
It is a very good reference site
- Audio Amplifier Circuits
- Very Low Noise Amplifier (designed for a phase noise test set)
- Induction Receiver
- White and Pink (1/F) Noise Source
- VLF Whistler Reception See also ELF Monitor with AGC and Hum Removal [Area 50]
- Noisemaking Circuits (police siren and doorbell chime)
- Telephone Circuits
- 24 Line Parallel Interface for the PC (or just the schematic)
- Serial Port A/D - 12 bits, no power, cheap!
- Games - the first game show timer was modified for better performance
- Temperature Controlled Ovens
- Christmas Tree (or other plant) Moisture Monitor
- Kitchen Circuits
- Galloping Neons (array of blinking neon lamps)
- Deer Repellent (seismic sensor)
- Shower Stopper
- Gag Circuits: magic lamp, dripper, remote control jammer
- Construction Ideas
- Two-Transistor Atomic Frequency Standard
- Audio Oscillators
- Flip-Flop Flashers, Buzzers, Etc.
- Flasher Ideas (LED flasher, silent metronome, low battery indicator, high voltage generator)
- Tiny Crystal Oscillator High-stability gate oscillator uses two, 5-pin SOT packaged devices.
- Low Distortion Crystal Oscillator
- Anti-Flicker Circuit for LED Christmas Lights [Area 50]
- Magic Lamp (Old idea, new twist.)
- Line Powered Flashers (Great for Christmas lights, etc.)
- Stepper Motor Experiment
- Dome Lamp Dimmer
- Emergency Backup Lamp
- Battery Saver (Worth the effort!)
- Windshield Wiper Delay
- Lantern Flasher/Dimmer
- Dimmer/ Motor Speed Control
Power Supplies /Regulators
- Split Supply for Op-Amps
- Mercury Battery Replacement (also see Craig's versions)
- Homemade "B" Battery
High Voltage Circuits
- Battery Chargers
- Power Supplies and Regulators (new low power discrete switcher)
- Cockcroft-Walton Diode Voltage Multipliers
- Battery Discharge Monitor
- Finesse Regulator Noise
- World's Smallest Geiger Counter (Area 50)
- Top of Ion Chambers page
- Cookie Tin Ion Chamber
- Ion Chamber Bias Supply (Battery Topper)
- Cheap but Sensitive Radiation Detector (Even Cheaper and Simpler version)
- Neon Lamp as Resistor
- Nuclear War Detector
- Polonium Pen
- Neon Bulb Quench Experiment
- Geiger Counters
"First Web Page of Radio"
- Crystal Radio and Amplifiers and simple AM transistor radio circuits. (Also see diode detector info.)
- Regenerative Receivers
- Reflex Receivers
- The Amazing All-Band Receiver
- Cellular Phone Helper
- Aircraft Receiver
- WWV Converter for the Car
- Rush Converter for the Car
- FM Band Preamplifier
- Antennas Includes longwire, loading coil, active ferrite antenna, active short antenna, vertical.
- FM Band Monaural Transmitter
- Micropower AM band radio station and wireless "phono oscillator"
- Superregenerative Air Band Radio (Area 50)
- FM Radio Microphone with Internal Loop Antenna (Area 50)
- One Watt Class-C Amplifier
- Two Diode Odd-Order Multiplier
- Diode Frequency Doublers
- Unusual Frequency Dividers
- HCMOS Gates Make Frequency Multipliers
- Frequency Tripler Using the CA3028
- Simple Divide-by-N Using '161s
- Waveform Conversion (sine to square and square to sine)
- Test Equipment
- Picoampere Leakage Tester
- Transistor Spot Checker
- V/I Curve Tracer
- Measuring RF Voltages with a VOM
- Synthetic Isolators - great for checking the SWR of antennas, amplifiers, mixers, etc.
- Curious C-Beeper
- Remarkable R-Beeper - by Tom Bruhns
Weather Related Circuits
- Lightning Detectors (also see the readers' versions)
- Rain Detector
- Geiger counters! (weather?) or Ion Chambers
- Cloud Charge Monitor
- Another One-Transistor Cloud Charge Monitor
This website contains numerous information sheets and exercises to enhance the study, understanding and teaching of DESIGN and TECHNOLOGY.
- Design Process
- Technology and environment
- Power tools
- Website links
- PCB work
- Gears and pulleys
- Resistant Materials
- Product design
- PIC controllers
- Computer Control
- Technology and Cultures
- Equipment and Processes
COMPONENTS / SYMBOLS
- Electronic Components and Symbols - 1
- Electronic Components and Symbols - 2
- Electronic Components and Symbols - 3
- Symbols Summary Sheet
- Lesson Starter - Electronic Components
- Lesson Starter - Electronic Tools and Equipment
- Basic Electronics - Wordsearch Exercise
COMPONENT AND CIRCUIT DETAILS
- Batteries and LEDs
- Button Batteries / Coin Cells - Simple Circuits
- Switches -1
- Switches - 2
- Switches - 3
- Incandescent Lamps (Bulbs) 1
- Incandescent Lamps (Bulbs) 2
- Series Circuits
- Circuits in Parallel
- A Series / Parallel Circuit
- Basic Circuit using Crocodile Technology®
- Circuit Simulation Software
- The Diode
- Using Crocodile Technology® to Test the Relationship between Resistors and Capacitors
- Test Instruments
- Light Dependent Resistors
- The Preset Resistor
- Making a Light / Dark Sensor
- Sequence Drawing - Light/Dark Sensor
- LDR Examination Question - 1
- LDR Examination Question - 2
- LDR Examination Question - 3
- Sensor Question - LDR
- Switches, Diode, Transistor and General Circuit Questions
- The Thermistor
- Information Regarding Thermistor Circuits and Darlington Pairs
- A Typical Temperature Sensor
- Potential Dividers
- SI Units and OHM's Law
- Resistors - Questions
- Relays and Practical Circuits
- The Basics on Transistors
- Transistors - Darlington Pairs
- Transistor Exam Question
- Transistor Breadboard Project
- Transistor Darlington Pair Breadboard Project
- Transistor Formulas and Calculations
- Dual Transistor Multivibrator Circuit
- Lesson Starter - Astable Transistor Circuit (Multivibrator)
- Examination Questions : Dual Transistor Multivibrator Circuits
- The Thyristor
- Steady Hand Game - Thyristor Circuit
- Examination Questions - Jigs
- Analogue / Digital Systems
- Control Systems
- Control Systems - An Example
- Modular Electronics
- Modular Electronics - Control Studio® -1
- Modular Electronics - Control Studio® -2
- Modular Electronics - Control Studio® -3
DIGITAL ELECTRONICS AND LOGIC GATES
- Digital Electronics and Logic Circuits - 1
- Digital Electronics and Logic Gates - 2 (Role of Transistors)
- Basic Logic Gates and Logic Tables
- Alternative representations of Logic Tables
- Example Logic Circuit - 1
- Example Logic Circuit - 2
- The 4081B Integrated Circuit (AND gate) - a Practical Example.
- The 4081B - Circuit Design
- Digital Logic Circuit Exam Question 1
- Digital Logic Circuit Exam Question 2
- Digital Logic Circuit Exam Question 3
- AND Gate Sample Question
- AND Gate Sample Answer
- The Binary System
- Integrated Circuits - 1
- Integrated Circuits - 2
- Integrated Circuits - 3
THE 555 INTEGRATED CIRCUIT
- The 555 Integrated Circuit (Timer) - a Simple Explanation
- The 555 IC as an ASTABLE Circuit
- The 555 IC as a MONOSTABLE Circuit
- 555 MONOSTABLE Examples
- The MONOSTABLE Circuit in more detail
- Animation 1 - 555 Timer Circuit - Frames Created by Crocodile Technology 3D
- Animation 2 - 555 Timer Circuit - Frames Created by Crocodile Technology 3D
- Monostable Timer Examination Questions - 1
- Monostable Timer Examination Questions - 2
- ASTABLE 555 Breadboard Project
- 555 ASTABLE Examples
- The ASTABLE Circuit in more detail
THE 741 OPERATIONAL AMPLIFIER
- The Operational Amplifier used as an Amplifier - a Simple Explanation
- The 741 Operational Amplifier used as an Amplifier with Sensors
- The 741 Operational Amplifier - an introduction
- Inverting and Non-inverting 741 Amplifiers
- The 741 Operational Amplifier as a Comparator
- 741 Operational Amplifier Comparator Examination Questions and Information
THE 4017B DECADE COUNTER
- How the 4017B Decade Counter Works
- Adding a Motor and a Solenoid to the 4017B Decade Counter