CHAPTER 1 INTRODUCTION
1. INTRODUCTION TO VLSI TECHNOLOGY:
The development of microelectronics spans a period even shorter than the average human life expectancy, and yet it has already seen four generations. The early 1960s saw the low-density fabrication processes classified under Small-Scale Integration (SSI), in which the transistor count was limited to about 10. This rapidly gave way to Medium-Scale Integration (MSI) in the late 1960s, when around 100 transistors could be placed on a single chip.
It was the time when the cost of research began to decline and private firms started entering the competition, in contrast to the earlier years, when the main burden was borne by the military. Transistor-Transistor Logic (TTL), offering higher integration densities, outlasted other IC families such as ECL and became the basis of the first integrated-circuit revolution. It was the production of this family that gave impetus to semiconductor giants like Texas Instruments, Fairchild and National Semiconductors. The early seventies marked the growth of the transistor count to about 1000 per chip, called Large-Scale Integration (LSI).
By the mid-eighties, the transistor count on a single chip had grown into the hundreds of thousands, and so came the age of Very Large-Scale Integration, or VLSI. Though many improvements have been made and the transistor count is still rising, further generation names such as ULSI are generally avoided. It was during this time that TTL lost the battle to the MOS family, owing to the same problems that had pushed vacuum tubes into obsolescence: power dissipation and the limit it imposed on the number of gates that could be placed on a single die.
The second age of the integrated-circuit revolution started with the introduction of the first microprocessor, the 4004, by Intel in 1971, followed by the 8080 in 1974. Today many companies like Texas Instruments, Infineon, Alliance Semiconductors, Cadence, Synopsys, Celox Networks, Cisco, Micron Tech, National Semiconductors, ST Microelectronics, Qualcomm, Lucent, Mentor Graphics, Analog Devices, Intel, Philips, Motorola and many other firms have been established and are dedicated to the various fields in VLSI, such as Programmable Logic Devices, Hardware Description Languages, design tools, embedded systems, etc.
The integration-level names are a hold-over from an outdated 1980s taxonomy, obviously influenced by the radio-frequency bands (HF, VHF, UHF). Sources disagree on what is counted (gates or transistors):
SSI – Small-Scale Integration (up to 10^2)
MSI – Medium-Scale Integration (10^2 - 10^3)
LSI – Large-Scale Integration (10^3 - 10^5)
VLSI – Very Large-Scale Integration (10^5 - 10^7)
ULSI – Ultra Large-Scale Integration (>= 10^7)
VLSI Technology, Inc. was a company which designed and manufactured custom and semi-custom ICs. The company was based in Silicon Valley, with headquarters at 1109 McKay Drive in San Jose, California. Along with LSI Logic, VLSI Technology defined the leading edge of the application-specific integrated circuit (ASIC) business, which accelerated the push of powerful embedded systems into affordable products. The company was founded in 1979 by a trio from Fairchild Semiconductor by way of Synertek - Jack Balletto, Dan Floyd, and Gunnar Wetlesen - and by Doug Fairbairn of Xerox PARC and Lambda (later VLSI Design) magazine. Alfred J. Stein became the CEO of the company in 1982. Subsequently VLSI built its first fab in San Jose; eventually a second fab was built in San Antonio, Texas. VLSI had its initial public offering in 1983 and was listed on the stock market as (NASDAQ: VLSI). The company was later acquired by Philips and survives to this day as part of NXP Semiconductors. The first semiconductor chips held one transistor each. Subsequent advances added more and more transistors, and, as a consequence, more individual functions or systems were integrated over time. The
first integrated circuits held only a few devices, perhaps as many as ten diodes, transistors, resistors and capacitors, making it possible to fabricate one or more logic gates on a single device. Now known retrospectively as small-scale integration (SSI), improvements in technique led to devices with hundreds of logic gates, known as medium-scale integration (MSI). Further improvements led to large-scale integration (LSI), i.e. systems with at least a thousand logic gates. Current technology has moved far past this mark and today's microprocessors have many millions of gates and billions of individual transistors.
At one time, there was an effort to name and calibrate various levels of large-scale integration above VLSI. Terms like ultra-large-scale integration (ULSI) were used. But the huge number of gates and transistors available on common devices has rendered such fine distinctions moot. Terms suggesting greater-than-VLSI levels of integration are no longer in widespread use.
As of early 2008, billion-transistor processors are commercially available; an example is Intel's Montecito Itanium chip. This is expected to become more commonplace as semiconductor fabrication moves from the current generation of 65 nm processes to the next 45 nm generations (while experiencing new challenges such as increased variation across process corners). Another notable example is NVIDIA's 280 series GPU, which is unique in that almost all of its 1.4 billion transistors are used for logic, in contrast to the Itanium, whose large transistor count is largely due to its 24 MB L3 cache. Current designs, as opposed to the earliest devices, use extensive design automation and automated logic synthesis to lay out the transistors, enabling higher levels of complexity in the resulting logic functionality. Certain high-performance logic blocks like the SRAM (Static Random-Access Memory) cell, however, are still designed by hand to ensure the highest efficiency (sometimes by bending or breaking established design rules, trading stability for the last bit of performance). VLSI technology is also moving towards radical miniaturization with the introduction of NEMS technology, though many problems need to be sorted out before that transition is actually made.
WHY VLSI?
Integration improves the design: it lowers parasitics, which means higher speed and lower power consumption, and makes the system physically smaller. Integration also reduces manufacturing cost, since (almost) no manual assembly is required.
The course will cover basic theory and techniques of digital VLSI design in CMOS technology. Topics include: CMOS devices and circuits, fabrication processes, static and dynamic logic structures, chip layout, simulation and testing, low-power techniques, design tools and methodologies, and VLSI architecture. We use full-custom techniques to design basic cells and regular structures such as datapaths and memory. There is an emphasis on modern design issues in interconnect and clocking. We will also use several case studies to explore recent real-world VLSI designs (e.g. Pentium, Alpha, PowerPC, StrongARM, etc.) and papers from the recent research literature. On-campus students will design small test circuits using various CAD tools. Circuits will be verified and analyzed for performance with various simulators. Some final project designs will be fabricated and returned to students the following semester for testing.
Very-large-scale integration (VLSI) is the process of creating integrated circuits by combining thousands of transistor-based circuits into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. The term is no longer as common as it once was, as chips have increased in complexity into the hundreds of millions of transistors.
The original business plan was to be a contract wafer fabrication company, but the venture investors wanted the company to develop IC (Integrated Circuit) design tools to help fill the foundry. Thanks to its Caltech and UC Berkeley students, VLSI was an important pioneer in the electronic design automation industry.
It offered a sophisticated package of tools, originally based on the 'lambda-based' design style advocated by Carver Mead and Lynn Conway. VLSI became an early vendor of standard cells (cell-based technology) to the merchant market in the early 80s, while the other ASIC-focused company, LSI Logic, was a leader in gate arrays. Prior to VLSI's cell-based offering, the technology had been available primarily within large vertically integrated companies with semiconductor units, such as AT&T and IBM.
VLSI's design tools eventually included not only design entry and simulation but also cell-based routing (a chip compiler), a datapath compiler, SRAM and ROM compilers, and a state-machine compiler. The tools formed an integrated design solution for IC design, not just point tools or more general-purpose system tools.
A designer could edit transistor-level polygons and/or logic schematics, then run DRC and LVS, extract parasitics from the layout, run SPICE simulation, and then back-annotate timing or gate-size changes into the logic schematic database. Characterization tools were integrated to generate FrameMaker data sheets for libraries. VLSI eventually spun off the CAD and library operation into Compass Design Automation, but it never reached IPO before being purchased by Avant! Corporation.
VLSI's physical design tools were critical not only to its ASIC business but also in setting the bar for the commercial EDA industry. When VLSI and its main ASIC competitor, LSI Logic, were establishing the ASIC industry, commercially available tools could not deliver the productivity necessary to support the physical design of hundreds of ASIC designs each year without deploying a substantial number of layout engineers. The companies' development of automated layout tools was a rational "make, because there's nothing to buy" decision. The EDA industry finally caught up in the late 1980s when Tangent Systems released its TanCell and TanGate products. In 1989, Tangent was acquired by Cadence Design Systems (founded in 1988). Unfortunately, for all VLSI's initial competence in design tools, it was not a leader in semiconductor manufacturing technology. VLSI had not been timely in developing a 1.0 µm manufacturing process as the rest of the industry moved to that geometry in the late 80s. VLSI entered a long-term technology partnership with Hitachi and finally released a 1.0 µm process and cell library (actually more of a 1.2 µm library with a 1.0 µm gate).
As VLSI struggled to gain parity with the rest of the industry in semiconductor technology, the design flow was moving rapidly to a Verilog HDL and synthesis flow. Cadence acquired Gateway, the leader in the Verilog hardware description language (HDL), and Synopsys was dominating the exploding field of design synthesis. As VLSI's tools were being eclipsed, VLSI waited too long to open its tools up to other foundries, and Compass Design Automation was never a viable competitor to the industry leaders. Meanwhile, VLSI entered the merchant high-speed static RAM (SRAM) market because it needed a product to drive its semiconductor process technology development. All the large semiconductor companies built high-speed SRAMs with cost structures VLSI could never match, and VLSI withdrew once it was clear that the Hitachi process technology partnership was working.
ARM Ltd was formed in 1990 as a semiconductor intellectual-property licensor, backed by Acorn, Apple and VLSI. VLSI became a licensee of the powerful ARM processor, and ARM finally funded processor tools. Initial adoption of the ARM processor was slow, as few applications could justify the overhead of an embedded 32-bit processor. In fact, despite the addition of further licensees, the ARM processor enjoyed little market success until the novel Thumb extensions were developed. Ericsson adopted the ARM processor in a VLSI chipset for its GSM handset designs in the early 1990s, and it was that GSM boost that laid the foundation for the ARM company and technology of today.
Only in PC chipsets did VLSI dominate in the early 90s. This product line, developed by five engineers using the 'mega cells' in the VLSI library, led to a business unit at VLSI that almost equaled its ASIC business in revenue. VLSI eventually ceded the market to Intel, because Intel was able to package-sell its processors, chipsets, and even board-level products together. VLSI also had an early partnership with PMC, a design group that had been nurtured by British Columbia Bell. When PMC wanted to divest its semiconductor intellectual-property venture, VLSI's bid was beaten by a creative deal from Sierra Semiconductor. The telecom business-unit management at VLSI opted to go it alone, and PMC-Sierra became one of the most important telecom ASSP vendors.
Scientists and innovations from the 'design technology' part of VLSI found their way to Cadence Design Systems (by way of Redwood Design Automation). Compass Design Automation (VLSI's CAD and Library spin-off) was sold to Avant! Corporation, which itself was acquired by Synopsys.
Structured design:
Structured VLSI design is a modular methodology originated by Carver Mead and Lynn Conway for saving microchip area by minimizing the area of the interconnect fabric. This is obtained by a repetitive arrangement of rectangular macro blocks which can be interconnected by wiring by abutment. An example is partitioning the layout of an adder into a row of equal bit-slice cells. In complex designs this structuring may be achieved by hierarchical nesting.
Structured VLSI design was popular in the early 1980s but lost its popularity later with the advent of placement and routing tools, which waste a great deal of area on routing; this is tolerated because of the progress of Moore's Law. When introducing the hardware description language KARL in the mid-1970s, Reiner Hartenstein coined the term "structured VLSI design" (originally "structured LSI design"), echoing Edsger Dijkstra's structured programming approach, which uses procedure nesting to avoid chaotic spaghetti-structured programs.
WHAT IS VLSI?
VLSI stands for "Very Large-Scale Integration". This is the field which involves packing more and more logic devices into smaller and smaller areas.
Design/manufacturing of extremely small, complex circuitry using modified semiconductor material
An integrated circuit (IC) may contain millions of transistors, each only a few micrometers (and, in modern processes, tens of nanometers) in size
In the old days, huge computers made of vacuum tubes occupied entire dedicated rooms and could do about 360 multiplications of 10-digit numbers in a second. Modern-day computers are getting smaller, faster, cheaper and more power-efficient with every passing second. Electronic miniaturization began with the invention of the point-contact transistor by Bardeen and Brattain (1947-48) and then the bipolar transistor by Shockley (1949) at Bell Laboratories.
The first IC (Integrated Circuit) was invented by Jack Kilby in 1958, in the form of a flip-flop. Since then, our ability to pack more and more transistors onto a single chip has doubled roughly every 18 months, in accordance with Moore's Law. Such exponential development had never been seen in any other field, and it still continues in major areas of research.
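As a numerical illustration of this 18-month doubling, the sketch below projects transistor counts from a 1971 baseline of roughly 2,300 transistors (the Intel 4004); the function name and the clean doubling curve are illustrative choices, and real device counts deviate from them.

```python
# Illustrative sketch of Moore's Law as an 18-month doubling from a
# 1971 baseline of 2,300 transistors (the Intel 4004). Real chips do
# not follow this idealized curve exactly.

def transistors(year, base_year=1971, base_count=2300, doubling_months=18):
    """Estimated transistor count per chip, assuming a clean 18-month doubling."""
    months = (year - base_year) * 12
    return base_count * 2 ** (months / doubling_months)

# transistors(1971) → 2300.0; eighteen months later the estimate doubles:
# transistors(1972.5) → 4600.0
```

The doubling period itself is the key parameter: substituting 24 months gives the alternative formulation of the law sometimes quoted in the literature.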
History & Evolution:
The development of microelectronics spans a period even shorter than the average human life expectancy, and yet it has already seen four generations. In the early 1960s came the low-density fabrication processes classified under Small-Scale Integration (SSI), in which the transistor count was limited to about 10. This rapidly gave way to Medium-Scale Integration (MSI) in the late 1960s, when around 100 transistors could be placed on a single chip. It was the time when the cost of research began to decline and private firms started entering the competition. Transistor-Transistor Logic (TTL), offering higher integration densities than other IC families such as ECL, became the basis of the first integrated-circuit revolution. The early seventies marked the growth of the transistor count to about 1000 per chip, called Large-Scale Integration (LSI). By the mid-eighties, the transistor count on a single chip had grown into the hundreds of thousands, and so came the age of Very Large-Scale Integration, or VLSI. Though many improvements have been made and the transistor count is still rising, further generation names such as ULSI are generally avoided.
Future of VLSI:
Generally, VLSI technology is used in devices like computers, cell phones, digital cameras and other electronic gadgets. There are certain key issues that serve as active areas of research and are constantly improving as the field continues to mature.
VLSI is dominated by CMOS technology, and much like other logic families, this too has its limitations, which have been battled and improved upon for years. Taking the example of a processor, the process technology rapidly shrank from 180 nm in 1999 to 65 nm in 2008; it now stands at 45 nm, and attempts are being made to reduce it to 32 nm. As the number of transistors increases, power dissipation and noise increase as well.
Fig.1.1. Evolution of VLSI
More heat is generated per unit area as a result. New alternatives like Gallium Arsenide technology are becoming active areas of research, and the future of VLSI seems to change by the moment.
History of Scale Integration:
SSI - Small-Scale Integration (up to 10^2)
MSI - Medium-Scale Integration (10^2 - 10^3)
LSI - Large-Scale Integration (10^3 - 10^5)
VLSI - Very Large-Scale Integration (10^5 - 10^7)
ULSI - Ultra Large-Scale Integration (>= 10^7)
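The classification above can be expressed as a small lookup. The sketch below counts transistors (sources differ on whether gates or transistors are counted); the function name is an illustrative choice, not a standard API.

```python
# Small lookup implementing the integration-level taxonomy listed in
# the text, using transistor counts and the thresholds given there.

def integration_level(transistor_count: int) -> str:
    if transistor_count < 10**2:
        return "SSI"
    if transistor_count < 10**3:
        return "MSI"
    if transistor_count < 10**5:
        return "LSI"
    if transistor_count < 10**7:
        return "VLSI"
    return "ULSI"

# integration_level(2300) → "LSI"; integration_level(10**9) → "ULSI"
```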
System Design:
Switches:
Tools: Verilog, VHDL, SystemC
Synthesizable (PLDs and/or ASICs)
Non-synthesizable
More in future lectures
Digital equipment is largely composed of switches.
Switches can be built from many technologies
Relays (from which the earliest computers were built)
Thermionic valves
Transistors
The perfect digital switch would have the following properties:
Switch instantly
Use no power
Have an infinite resistance when off and zero resistance when on
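Since digital logic is built from networks of such switches, the behavior of simple gates can be sketched with an idealized on/off switch model; the function names below are illustrative, not from the text.

```python
# Idealized switch model: each switch is either fully conducting
# (closed) or fully open, switches instantly, and uses no power.
# Gates are then switch networks: two switches in series form the
# pull-down path of a NAND, two in parallel the pull-down of a NOR.

def nand(a: bool, b: bool) -> bool:
    pull_down = a and b      # series switches: both must be closed
    return not pull_down     # output pulled low only when the path conducts

def nor(a: bool, b: bool) -> bool:
    pull_down = a or b       # parallel switches: either one suffices
    return not pull_down

# Any logic function can be composed from these, e.g. an AND gate:
def and_gate(a: bool, b: bool) -> bool:
    return not nand(a, b)
```

Real switches, of course, violate all three ideal properties, which is what the rest of the chapter's discussion of device technology is about.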
Semiconductors and Doping:
Adding trace amounts of certain materials to semiconductors alters the crystal structure and can change their electrical properties, in particular the number of free electrons or holes. An N-type semiconductor has free electrons; the dopant is typically phosphorus, arsenic, or antimony. A P-type semiconductor has free holes; the dopant is typically boron, indium, or gallium. Dopants are usually implanted into the semiconductor using implant technology, followed by a thermal process to diffuse the dopants.
Metal-oxide-semiconductor (MOS) and related VLSI technology:
pMOS, nMOS, CMOS, BiCMOS, GaAs
Basic MOS transistors are characterized by the minimum line width, the transistor cross-section, the charge-inversion channel, the source connected to the substrate, and enhancement vs. depletion mode operation. pMOS devices are about 2.5 times slower than nMOS devices because of the difference between electron and hole mobilities.
Fabrication Technology:
Silicon of extremely high purity is chemically refined and then grown into large single crystals. Wafers are sliced from the crystal to form substrates; wafer diameters are currently 150 mm, 200 mm, or 300 mm, wafer thickness is less than 1 mm, and the surface is polished to optical smoothness. The wafer is then ready for processing. Each wafer will yield many chips; chip die size varies from about 5 mm x 5 mm to 15 mm x 15 mm. A whole wafer is processed at a time. Different parts of each die will be made P-type or N-type by intentionally introducing small amounts of other atoms (doping, by implant). Interconnections are made with metal; the insulating material is typically SiO2 or SiN, and new materials (low-k dielectrics) are being investigated.
In CMOS fabrication (whether a p-well, n-well, or twin-tub process), all the devices on the wafer are made at the same time. After the circuitry has been placed on the chip, the chip is overglassed (with a passivation layer) to protect it; only those areas which connect to the outside world are left uncovered (the pads). The wafer finally passes to a test station, where test probes send test signal patterns to the chip and monitor the chip's output. The yield of a process is the percentage of dice which pass this testing. The wafer is then scribed and separated into individual chips. These are then packaged, and the chips are 'binned' according to their performance.
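The yield figure defined above is a simple percentage of tested dice that pass. A minimal sketch, with hypothetical die counts:

```python
# Yield as defined in the text: the percentage of tested dice that
# pass the wafer-level test. The example numbers are hypothetical.

def process_yield(passed_dice: int, tested_dice: int) -> float:
    """Percentage of tested dice that pass."""
    return 100.0 * passed_dice / tested_dice

# e.g. 412 good dice out of 500 tested:
# process_yield(412, 500) → 82.4
```

In practice, yield models also relate the defect density of the process to die area, which is one reason smaller dice are cheaper to manufacture.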
CONVENTIONAL APPROACH TO DIGITAL DESIGN:
Digital ICs of SSI and MSI types have become universally standardized and have been accepted for use. Whenever a designer has to realize a digital function, he uses a standard set of ICs along with a minimal set of additional discrete circuitry.
DESIGN OF VLSI:
The complexity of the VLSI circuits being designed and used today makes the manual approach impractical; design automation is the order of the day. With the rapid technological developments of the last two decades, the status of VLSI technology is characterized by the following:
A steady increase in the size and hence the functionality of the ICs.
A steady reduction in feature size and hence increase in the speed of operation as well as gate or transistor density.
A steady improvement in the predictability of circuit behavior.
A steady increase in the variety and size of software tools for VLSI design.
The above developments have resulted in a proliferation of approaches to VLSI design. We briefly describe the procedure of automated design flow.
The aim is more to bring out the role of a Hardware Description Language (HDL) in the design process. An abstraction-based model is the basis of the automated design.
The model divides the whole design cycle into various domains. With such an abstraction through a division process, the design is carried out in different layers. The designer at one layer can function without bothering about the layers above or below. The thick horizontal lines separating the layers in the figure signify this compartmentalization. As an example, consider design at the gate level. The circuit to be designed is described in terms of truth tables and state tables. With these as the available inputs, the designer expresses them as Boolean logic equations and realizes them in terms of gates and flip-flops. In turn, these form the inputs to the layer immediately below. Compartmentalization of the approach to design in the manner described here is the essence of abstraction; it is the basis for the development and use of CAD tools in VLSI design at various levels.
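The gate-level step described above, going from a truth table to Boolean logic equations, can be mechanized: a sum-of-products expression is read off the rows where the output is 1. In the sketch below, the function name and the prime (') notation for complements are illustrative conventions, not from the text.

```python
# Mechanizing the truth-table-to-Boolean-equation step: emit one
# product term per row of the truth table whose output is 1.
from itertools import product

def sum_of_products(truth, names):
    terms = []
    for bits in product((0, 1), repeat=len(names)):
        if truth(*bits):
            # A 1 keeps the variable; a 0 complements it (written X').
            terms.append("".join(n if b else n + "'" for n, b in zip(names, bits)))
    return " + ".join(terms)

# XOR from its truth table:
# sum_of_products(lambda a, b: a ^ b, ["A", "B"]) → "A'B + AB'"
```

A logic-synthesis tool does essentially this, followed by minimization and mapping of the resulting equations onto the gates available in the target library.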
The design methods at different levels use the respective aids such as Boolean equations, truth tables, state transition table, etc. But the aids play only a small role in the process. To complete a design, one may have to switch from one tool to another, raising the issues of tool compatibility and learning new environments.
VLSI AND SYSTEMS:
These advantages of integrated circuits translate into advantages at the system
level:
Smaller physical size:
Smallness is often an advantage in itself; consider portable televisions or handheld cellular telephones.
Lower power consumption:
Replacing a handful of standard parts with a single chip reduces total power consumption. Reducing power consumption has a ripple effect on the rest of the system: a smaller, cheaper power supply can be used; since less power consumption means less heat, a fan may no longer be necessary; and a simpler cabinet with less electromagnetic shielding may be feasible, too.
Reduced cost:
Reducing the number of components, the power supply requirements, cabinet costs, and so on, will inevitably reduce system cost. The ripple effect of integration is such that the cost of a system built from custom ICs can be less, even though the individual ICs cost more than the standard parts they replace. Understanding why integrated circuit technology has had such a profound influence on the design of digital systems requires understanding both the technology of IC manufacturing and the economics of ICs and digital systems.
Electronic systems now perform a wide variety of tasks in daily life. Electronic systems in some cases have replaced mechanisms that operated mechanically, hydraulically, or by other means; electronics are usually smaller, more flexible, and easier to service. In other cases, electronic systems have created totally new applications. Electronic systems perform a variety of tasks some of them are visible while some are hidden. Personal entertainment systems such as portable MP3 players and DVD players perform sophisticated algorithms with remarkably little energy.
Electronic systems in cars operate stereo systems and displays; they also control fuel injection systems, adjust suspensions to varying terrain, and perform the
control functions required for anti-lock braking systems. Digital electronics compress and decompress video, even at high-definition data rates, on-the-fly in consumer electronics. Low-cost terminals for Web browsing still require sophisticated electronics, despite their dedicated function. Personal computers and workstations provide word-processing, financial analysis, and games. Computers include both central processing units and special-purpose hardware for disk access, faster screen display, etc. Medical electronic systems measure bodily functions and perform complex processing algorithms to warn about unusual conditions. The availability of these complex systems, far from overwhelming consumers, only creates demand for even more complex systems.
The growing sophistication of applications continually pushes the design and manufacturing of integrated circuits and electronic systems to new levels of complexity. And perhaps the most amazing characteristic of this collection of systems is its variety-as systems become more complex, we build not a few general-purpose computers but an ever-wider range of special-purpose systems. Our ability to do so is a testament to our growing mastery of both integrated circuit manufacturing and design, but the increasing demands of customers continue to test the limits of design and manufacturing.
ASIC:
An Application-Specific Integrated Circuit (ASIC) is an integrated circuit (IC) customized for a particular use, rather than intended for general-purpose use. For example, a chip designed solely to run a cell phone is an ASIC. Intermediate between ASICs and industry standard integrated circuits, like the 7400 or the 4000 series, are application specific standard products (ASSPs).
As feature sizes have shrunk and design tools improved over the years, the maximum complexity (and hence functionality) possible in an ASIC has grown from 5,000 gates to over 100 million. Modern ASICs often include entire 32-bit processors, memory blocks including ROM, RAM, EEPROM, Flash and other large building blocks. Such an ASIC is often termed a SoC (system-on-a-chip). Designers of digital ASICs use a hardware description language (HDL), such as Verilog or VHDL, to describe the functionality of ASICs.
A structured ASIC falls between an FPGA and a standard-cell-based ASIC: it uses a predefined arrangement of known cells.
ASIC DESIGN FLOW:
As with any other technical activity, the development of an ASIC starts with an idea and takes tangible shape through the stages of development shown in Fig 1.2. The first step in this process is to expand the idea in terms of the behavior of the target circuit.
Fig 1.2: ASIC Design flow
The design is tested through a simulation process to check, verify, and ensure that what is wanted is what is described. Simulation is carried out with dedicated tools. With every simulation run, the results are studied to identify errors in the design description. The errors are corrected, and another simulation run is carried out. Simulation and changes to the design description together form a cyclic, iterative process that is repeated until an error-free design evolves.
APPLICATIONS OF VLSI:
CHAPTER 2 CMOS TECHNOLOGY
Complementary metal–oxide–semiconductor (CMOS):
CMOS is also sometimes referred to as complementary-symmetry metal– oxide–semiconductor (or COS-MOS). The words "complementary-symmetry" refer to the fact that the typical design style with CMOS uses complementary and symmetrical pairs of p-type and n-type metal oxide semiconductor field effect transistors (MOSFETs) for logic functions.
Two important characteristics of CMOS devices are high noise immunity and low static power consumption. Since one transistor of the pair is always off, the series combination draws significant power only momentarily during switching between on and off states. Consequently, CMOS devices do not produce as much waste heat as other forms of logic, for example transistor–transistor logic (TTL) or NMOS logic, which normally have some standing current even when not changing state. CMOS also allows a high density of logic functions on a chip. It was primarily for this reason that CMOS became the most used technology to be implemented in VLSI chips.
CMOS circuits use a combination of p-type and n-type metal–oxide–semiconductor field-effect transistors (MOSFETs) to implement logic gates and other digital circuits. Although CMOS logic can be implemented with discrete devices for demonstrations, commercial CMOS products are integrated circuits composed of up to billions of transistors of both types, on a rectangular piece of silicon of between 10 and 400 mm².
CMOS circuits are constructed in such a way that all PMOS transistors must have either an input from the voltage source or from another PMOS transistor. Similarly, all NMOS transistors must have either an input from ground or from another NMOS transistor. The composition of a PMOS transistor creates low resistance between its source and drain contacts when a low gate voltage is applied and high resistance when a high gate voltage is applied. On the other hand, the composition of an NMOS transistor creates high resistance between source and drain when a low gate voltage is applied and low resistance when a high gate voltage is applied. CMOS accomplishes current reduction by complementing every nMOSFET with a pMOSFET and connecting both gates and both drains together. A high voltage on the gates will cause the nMOSFET to conduct and the pMOSFET to not conduct, while a low voltage on the gates causes the reverse. This arrangement greatly reduces power consumption and heat generation. However, during the switching time, both MOSFETs conduct briefly as the gate voltage goes from one state to another. This induces a brief spike in power consumption and becomes a serious issue at high frequencies.
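The complementary switch behaviour just described can be sketched as a tiny behavioural model (an illustrative Python sketch, not a circuit simulator; the function names are ours):

```python
def pmos_conducts(gate):
    return gate == 0  # a pMOS conducts when its gate is low

def nmos_conducts(gate):
    return gate == 1  # an nMOS conducts when its gate is high

def cmos_inverter(a):
    pull_up = pmos_conducts(a)    # path from the supply to the output
    pull_down = nmos_conducts(a)  # path from the output to ground
    assert pull_up != pull_down   # exactly one network conducts at steady state
    return 1 if pull_up else 0

print([cmos_inverter(a) for a in (0, 1)])  # [1, 0]
```

The assertion captures the key CMOS property: at steady state one and only one of the two networks conducts, so the output is always driven and no static current flows.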
The image on the right shows what happens when an input is connected to both a PMOS transistor (top of diagram) and an NMOS transistor (bottom of diagram). When the voltage of input A is low, the NMOS transistor's channel is in a high resistance state. This limits the current that can flow from Q to ground. The PMOS transistor's channel is in a low resistance state and much more current can flow
from the supply to the output. Because the resistance between the supply voltage and Q is low, the voltage drops between the supply voltage and Q due to a current drawn from Q is small. The output therefore registers a high voltage.
On the other hand, when the voltage of input A is high, the PMOS transistor is in an OFF (high resistance) state, limiting the current that can flow from the positive supply to the output, while the NMOS transistor is in an ON (low resistance) state, allowing current to flow from the output to ground. Because the resistance between Q and ground is low, the voltage drop placing Q above ground due to a current drawn into Q is small. This low drop results in the output registering a low voltage.
In short, the outputs of the PMOS and NMOS transistors are complementary such that when the input is low, the output is high, and when the input is high, the output is low. Because of this behavior of input and output, the CMOS circuit's output is the inverse of the input.
The power supplies for CMOS are called VDD and VSS, or VCC and Ground (GND) depending on the manufacturer. VDD and VSS are carryovers from conventional MOS circuits and stand for the drain and source supplies. These do not apply directly to CMOS, since both supplies are really source supplies. VCC and Ground are carryovers from TTL logic, and that nomenclature has been retained with the introduction of the 54C/74C line of CMOS.
Duality:
An important characteristic of a CMOS circuit is the duality that exists between its PMOS transistors and NMOS transistors. A CMOS circuit is created so that a path always exists from the output to either the power source or ground. To accomplish this, the set of all paths to the voltage source must be the complement of the set of all paths to ground. This can be easily accomplished by defining one in terms of the NOT of the other. By De Morgan's laws, PMOS transistors in parallel have corresponding NMOS transistors in series, while PMOS transistors in series have corresponding NMOS transistors in parallel.
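This series/parallel duality can be checked exhaustively for a small example. The sketch below (an illustrative Python model of our own, not a circuit simulator) builds the two dual networks of an AND-OR-invert gate f = NOT((A AND B) OR C) and verifies that they are complements for every input:

```python
from itertools import product

def nmos(g):
    return g == 1   # nMOS conducts on a high gate

def pmos(g):
    return g == 0   # pMOS conducts on a low gate

# Pull-down network: A and B in series, that branch in parallel with C.
def pull_down(a, b, c):
    return (nmos(a) and nmos(b)) or nmos(c)

# Dual pull-up network: A and B in parallel, in series with C.
def pull_up(a, b, c):
    return (pmos(a) or pmos(b)) and pmos(c)

# By De Morgan's laws the dual networks are exact complements, so the
# output is driven high or low (never both, never floating) for all inputs.
for a, b, c in product((0, 1), repeat=3):
    assert pull_up(a, b, c) != pull_down(a, b, c)
print("dual networks are complementary for all 8 input combinations")
```

Swapping series for parallel while moving between the nMOS and pMOS networks is exactly the De Morgan complementation the text describes.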
Logic:
Fig.2.2. NAND gate in CMOS logic
More complex logic functions such as those involving AND and OR gates require manipulating the paths between gates to represent the logic. When a path consists of two transistors in series, both transistors must have low resistance to the corresponding supply voltage, modeling an AND. When a path consists of two transistors in parallel, either one or both transistors must have low resistance to connect the supply voltage to the output, modeling an OR.
Shown on the right is a circuit diagram of a NAND gate in CMOS logic. If both A and B inputs are high, then both the NMOS transistors (bottom half of the diagram) will conduct, neither of the PMOS transistors (top half) will conduct, and a conductive path will be established between the output and Vss (ground), bringing the output low. If both A and B inputs are low, then neither of the NMOS transistors will conduct, while both PMOS transistors will conduct, establishing a conductive path between the output and Vdd (voltage source), bringing the output high. If either of the A or B inputs is low, one of the NMOS transistors will not conduct, one of the PMOS transistors will, and a conductive path will be established between the output and Vdd (voltage source), bringing the output high. As the only configuration of the two inputs that results in a low output is when both are high, this circuit implements a NAND (NOT AND) logic gate. An advantage of CMOS over NMOS is that both low-to-high and high-to-low output transitions are fast since the pull-up transistors have low resistance when switched on, unlike the load resistors in NMOS logic. See Logical effort for a method of calculating delay in a CMOS circuit.
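The NAND behaviour just traced through the transistors can be condensed into a short behavioural model (an illustrative Python sketch, with names of our own choosing):

```python
def cmos_nand(a, b):
    pull_down = (a == 1) and (b == 1)  # two nMOS in series to ground
    pull_up = (a == 0) or (b == 0)     # two pMOS in parallel to the supply
    # pull_up and pull_down are always complementary, so the output
    # is low exactly when the series nMOS path conducts.
    return 0 if pull_down else 1

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", cmos_nand(a, b))  # only 1 1 -> 0
```

Running the loop reproduces the paragraph's conclusion: the only input combination giving a low output is both inputs high.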
Example: NAND gate in physical layout
Fig.2.3. NAND gate in physical layout
The physical layout of a NAND circuit. The larger regions of N-type diffusion and P- type diffusion are part of the transistors. The two smaller regions on the left are taps to prevent latch up.
Fig.2.4: NAND gate Fabrication
Simplified process of fabrication of a CMOS inverter on a p-type substrate in semiconductor microfabrication. Note: gate, source, and drain contacts are not normally in the same plane in real devices, and the diagram is not to scale.
This example shows a NAND logic device drawn as a physical representation as it would be manufactured. The physical layout perspective is a "bird's eye view" of
a stack of layers. The circuit is constructed on a P-type substrate. The polysilicon, diffusion, and n-well are referred to as "base layers" and are actually inserted into trenches of the P-type substrate. The contacts penetrate an insulating layer between the base layers and the first layer of metal (metal1) making a connection.
The inputs to the NAND (illustrated in green color) are in polysilicon. The CMOS transistors (devices) are formed by the intersection of the polysilicon and diffusion; N diffusion for the N device & P diffusion for the P device (illustrated in salmon and yellow coloring respectively). The output ("out") is connected together in metal (illustrated in cyan coloring). Connections between metal and polysilicon or diffusion are made through contacts (illustrated as black squares). The physical layout example matches the NAND logic circuit given in the previous example.
The N device is manufactured on a P-type substrate while the P device is manufactured in an N-type well (n-well). A P-type substrate "tap" is connected to VSS and an N-type n-well tap is connected to VDD to prevent latch up.
Fig.2.5. N-well CMOS process
Cross section of two transistors in a CMOS gate, in an N-well CMOS process
CMOS logic dissipates less power than NMOS logic circuits because CMOS dissipates power only when switching ("dynamic power"). On a typical ASIC in a modern 90 nanometer process, switching the output might take 120 picoseconds, and happens once every ten nanoseconds. NMOS logic dissipates power whenever the transistor is on, because there is a current path from Vdd to Vss through the load resistor and the n-type network.
Static dissipation:
Static CMOS gates are very power efficient because they dissipate nearly zero power when idle. Earlier, the power consumption of CMOS devices was not a major concern when designing chips; factors like speed and area dominated the design parameters. As CMOS technology moved into the deep sub-micron regime, the power consumption per unit area of the chip has risen tremendously.
Broadly classifying, static power dissipation in CMOS circuits occurs because of the following leakage components:
Subthreshold conduction when the transistors are off:
Both NMOS and PMOS transistors have a gate–source threshold voltage, below which the current through the device (called the subthreshold current) drops exponentially. Historically, CMOS designs operated at supply voltages much larger than their threshold voltages (Vdd might have been 5 V, and Vth for both NMOS and PMOS might have been 700 mV). A special type of CMOS transistor with near-zero threshold voltage is the native transistor.
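The exponential drop below threshold can be illustrated numerically. This is a simple textbook-style model with assumed example parameters (i0 and the slope factor n are made-up illustrative values, not real process data):

```python
import math

def subthreshold_current(vgs, vth, i0=1e-7, n=1.5, vt=0.026):
    # Simple exponential model of conduction below threshold.
    # i0 and n are assumed example parameters; vt is the thermal
    # voltage kT/q at room temperature (about 26 mV).
    return i0 * math.exp((vgs - vth) / (n * vt))

# Lowering the gate voltage by 100 mV cuts the current roughly 13x here
# (one decade of current per n*Vt*ln(10), about 90 mV, of gate voltage).
ratio = subthreshold_current(0.0, 0.7) / subthreshold_current(-0.1, 0.7)
print(round(ratio, 1))
```

The steep exponential explains why lowering Vth for speed (as later sections discuss) costs so much leakage: every ~90 mV of threshold reduction buys about a decade more off-state current in this model.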
Tunneling current through gate oxide:
SiO2 is a very good insulator, but at very small thickness levels electrons can tunnel across the very thin insulation; the probability drops off exponentially with oxide thickness. Tunneling current becomes very important for transistors below 130 nm technology with gate oxides of 20 Å or thinner.
Leakage current through reverse-biased diodes:
Small reverse leakage currents form due to the reverse bias between diffusion regions and wells (e.g., p-type diffusion vs. n-well) and between wells and substrate (e.g., n-well vs. p-substrate). In modern processes, diode leakage is very small compared to subthreshold and tunneling currents, so it may be neglected during power calculations.
Dynamic dissipation:
Charging and discharging of load capacitances:
CMOS circuits dissipate power by charging the various load capacitances (mostly gate and wire capacitance, but also drain and some source capacitances) whenever they are switched. In one complete cycle of CMOS logic, current flows from VDD to the load capacitance to charge it and then flows from the charged load capacitance (CL) to ground during discharge. Therefore, in one complete charge/discharge cycle, a total charge of Q = CL·VDD is transferred from VDD to ground. Multiply by the switching frequency on the load capacitances to get the current used, and multiply by the average voltage again to get the characteristic switching power dissipated by a CMOS device.
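The recipe in the paragraph above (charge per cycle, times frequency, times voltage) gives the familiar P = C·V²·f. It can be computed directly; the numbers below are assumed illustrative values, not measurements:

```python
def switching_power(c_load, vdd, f_switch):
    # Charge Q = C*Vdd moves from the supply to ground once per cycle,
    # so the average current is C*Vdd*f and the power is C*Vdd^2*f.
    return c_load * vdd ** 2 * f_switch

# Assumed example: 10 fF load, 1.2 V supply, switching at 1 GHz.
p = switching_power(10e-15, 1.2, 1e9)
print(p)  # about 1.44e-05 W, i.e. roughly 14 microwatts per such node
```

The quadratic dependence on Vdd is why supply-voltage scaling has been the most effective lever for reducing dynamic power.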
Short-circuit power dissipation:
Since there is a finite rise/fall time for both pMOS and nMOS transistors, during a transition, for example from off to on, both transistors will be on for a small period of time in which current finds a path directly from VDD to ground, creating a short-circuit current. Short-circuit power dissipation increases with the rise and fall times of the transistors.
An additional form of power consumption became significant in the 1990s as wires on chip became narrower and the long wires became more resistive. CMOS gates at the end of those resistive wires see slow input transitions. During the middle of these transitions, both the NMOS and PMOS logic networks are partially conductive and current flows directly from VDD to VSS. The power thus used is called crowbar power. Careful design which avoids weakly driven long skinny wires ameliorates this effect, but crowbar power can be a substantial part of dynamic CMOS power. To speed up designs, manufacturers have switched to constructions that have lower voltage thresholds but because of this a modern NMOS transistor with a Vth of 200 mV has a significant sub threshold leakage current. Designs (e.g. desktop processors) which include vast numbers of circuits which are not actively switching still consume power because of this leakage current. Leakage power is a significant
portion of the total power consumed by such designs. Multi-threshold CMOS (MTCMOS), now available from foundries, is one approach to managing leakage power. With MTCMOS, high-Vth transistors are used where switching speed is not critical, while low-Vth transistors are used in speed-sensitive paths. Further technology advances that use even thinner gate dielectrics have an additional leakage component because of current tunneling through the extremely thin gate dielectric. Using high-k dielectrics instead of silicon dioxide, the conventional gate dielectric, allows similar device performance with a thicker gate insulator, thus avoiding this current. Leakage power reduction using new materials and system designs is critical to sustaining the scaling of CMOS.
ADVANTAGES OF VLSI:
Smaller physical size
Lower power consumption
Reduced cost
CHAPTER-3 PROJECT DESCRIPTION
Existing system:
Before this physical design technique was introduced, there was a technique called logic design, in which logic gates were used to design any circuit. Some gates designed in this way are shown below.
AND gate:
Fig3.1. Schematic diagram of AND gate
An AND gate has two inputs and one output; only when both inputs are high do we get a high output, and in all other conditions the output is zero. We need 6 transistors to design one AND gate. The waveform is shown below.
Fig3.2. Timing diagram of AND gate
NAND gate:
Fig3.3. Schematic diagram of NAND gate
The above circuit is a NAND gate, which has two inputs and one output; if either input becomes low, the output will be high, otherwise the output will be low. This NAND gate requires 4 transistors, and the waveforms are shown below.
Fig3.4. Timing diagram of NAND gate
NOR gate:
Fig.3.5. Schematic diagram of NOR gate
The above circuit is a NOR gate, which has two inputs and one output; only when both inputs are low is the output high, otherwise it is zero.
Fig3.6. Timing diagram of NOR gate
MUX:
Fig3.7. Schematic diagram of MUX
The above circuit is a MUX; it has three inputs, one of which is a select line. The waveforms are shown below. Designing one MUX requires 20 transistors.
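Behaviourally, the 2:1 multiplexer reduces to a single conditional; the 20-transistor count above is the cost of its gate-level CMOS realization. A minimal Python sketch of the function (our own illustrative model):

```python
def mux2(a, b, sel):
    # 2:1 multiplexer: the output follows input a when sel is 0,
    # and input b when sel is 1.
    return b if sel else a

for sel in (0, 1):
    print(sel, mux2(0, 1, sel))  # sel=0 selects a (0), sel=1 selects b (1)
```

The contrast between this one-line function and its 20-transistor implementation is exactly the design-complexity problem the next paragraph raises.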
Fig3.8. Timing diagram of MUX gate
So far we have seen the design of the different gates and the MUX; in all of them, each gate or MUX requires several transistors. Since any circuit design requires a large number of gates or MUXes, the design complexity increases accordingly. To overcome this drawback, we move to deep submicron CMOS technology.
PROPOSED SYSTEM:
12T Memory Cell for Aerospace Application in Nano Scale CMOS Technology: Introduction:
MEMORIES are extensively used in aerospace applications as the medium to store data, and single event upsets (SEUs) induced by radiation particles are becoming one of the most significant issues. Because they can lead to data corruption in a memory chip while the circuit itself is not permanently damaged, SEUs are also described as soft errors. SEUs can therefore cause malfunctioning of an electronic system. In some critical memory applications (e.g., satellite equipment and cardioverter defibrillators), SEUs can be detrimental and crucial. However, radiation hardening techniques for memories are one of the bottlenecks in providing fault tolerance. For many years, radiation-hardening-by-design (RHBD) techniques have been used to tolerate soft errors in memories using standard commercial CMOS foundry processes, with no modifications to the existing process or violation of design rules. Traditionally, these techniques can be divided into the following three categories of techniques.
CHAPTER-4 SCHEMATIC DIAGRAMS
Cell Schematic and Write/Read Timing: The proposed RHBD 12T memory cell is shown in Fig. 1. Here, two access transistors, pMOS transistors P5 and P6, connect bit-lines BLN and BL to the output nodes QN and Q, respectively. Their ON/OFF state is determined by the word-line WL. It should be noted that when a radiation particle strikes a pMOS transistor, only a positive transient pulse (0→1 or 1→1) can be generated; on the contrary, only a negative transient pulse (1→0 or 0→0) can be induced when a radiation particle strikes an nMOS transistor [2]. Therefore, in order to avoid a negative transient pulse induced by a radiation particle on nodes Q and QN, pMOS transistors (i.e., transistors P6 and P5) are used as access transistors. Consider the stored 1 state (i.e., QN = 0, Q = 1, S0 = 0, and S1 = 1) for the proposed RHBD 12T cell (see Fig. 1). 1) When word-line WL is in the high state 1, transistors P1, P4, P7, N2, and N3 are ON, and the remaining transistors are OFF. Thus, nodes Q and QN are not changed, and they retain their original data. 2) Before a read operation is executed in the proposed 12T memory cell, the two bit-lines BL and BLN need to be precharged to the supply voltage VDD. During the read operation, when word-line WL is in the 0 state, the output node Q keeps its original state 1 without changing. However, because transistors P5, P7, and N2 are ON, bit-line BLN will be discharged. Next, when the voltage difference between the two bit-lines BL and BLN is obtained, the differential sense amplifier in the memory will output the stored data. 3) To write data 0 into the proposed 12T cell, word-line WL and bit-line BL need to be in the 0 state, and bit-line BLN must be in the 1 state. Subsequently, node Q will be pulled down to the 0 state, and node QN will be pulled up to the 1 state; transistors P2, P3, P8, N1, and N4 will be ON, and transistors P1, P4, P7, N2, and N3 will be OFF. When word-line WL is pulled back to the high state 1, the stored data will be 0. This means that data 0 can be successfully written into the proposed RHBD 12T memory cell. Fig. 3.9 shows a "write 0, read 0, write 1, and read 1" transient simulation result. We can see that the proposed cell can correctly achieve write and read operations.
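The write/hold protocol described in steps 1)–3) can be summarized as a simplified behavioural sketch. This is our own illustrative model: it ignores transistor-level timing, precharge dynamics, the internal S0/S1 nodes, and all SEU effects.

```python
class Cell12T:
    """Simplified behavioural model of the write/hold protocol above."""

    def __init__(self, q=1):
        self.q = q  # stored bit; QN is always the complement

    def access(self, wl, bl, bln):
        if wl == 0:            # WL low: pMOS access transistors are ON
            if bl != bln:      # differential drive on the bit-lines: write
                self.q = bl
            # bl == bln == 1 corresponds to a precharged read: q is kept
        # wl == 1: access transistors OFF, the cell holds its state
        return self.q, 1 - self.q

cell = Cell12T()             # stored 1 state: Q = 1, QN = 0
cell.access(0, 0, 1)         # write 0: WL = 0, BL = 0, BLN = 1
print(cell.access(1, 1, 1))  # hold with WL high: prints (0, 1)
```

The model mirrors the use of pMOS access transistors: the cell is opened by driving WL low, and raising WL back to 1 leaves the written value latched.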
Circuit diagram:
Fig.3.9 Proposed 12T memory cell.
B. SEU Recovery Analysis: In this section, the SEU recovery analysis results for the proposed RHBD 12T memory cell are presented. Considering the state shown in Fig. 1, node Q is not a sensitive node, because it is connected to the drain areas of the OFF pMOS transistors P6 and P8, and its stored value is the 1 state. Therefore, according to the upset physical mechanism, when node Q is struck, only a positive pulse is induced, i.e., node Q will be affected by a 1→1 transient pulse, so the stored value of node Q is not changed.
When node QN is upset by a radiation particle, node QN will be pulled up to state 1, and then transistors P1 and P4 will be OFF. Subsequently, nodes Q and S1 will remain at their original logic 1 state without losing voltage value. Therefore, transistor N3 will not be OFF, and node S0 keeps its original 0 state. Transistors P7 and N2 will be in the ON state, and then node QN will be pulled back to its original state 0.
When a radiation particle strikes node S0, its value will be changed. Then, transistor P7 is temporarily turned OFF and transistor N1 is temporarily turned ON; thus, node S1 will be pulled down to 0. However, because of the capacitive effect, node QN will not be changed to the 1 state, and transistors N4 and P1 keep their OFF and ON states,
respectively. Therefore, node Q will be unchanged through the ON transistor P4, and node S1 can recover to its initial 1 state. Finally, transistor N3 is turned ON, and node S0 will be pulled back to its original 0 state.
When the state of node S1 is changed to 0 from original 1 state by a radiation particle, transistors N3 and P8 will be turned OFF and ON, respectively. Because the voltages of nodes Q, QN, and S0 will be unchanged, transistor P1 remains ON. Therefore, node S1 will be pulled back to its original 1 state through ON transistor P1.
When a radiation particle strikes a semiconductor device because of charge sharing effect, multiple sensitive nodes may be affected. In the proposed 12T memory cell, if node pair S0–S1 is upset, transistors P7 and P8 will be temporarily turned OFF and ON, respectively. Subsequently, the analysis is the same as the analysis when the stored value of node S0 is changed. Therefore, nodes S0 and S1 will be pulled back to the original state, respectively.
Due to the charge sharing effect, if the voltage of node pair S0–QN or S1–QN is changed, the stored state of the proposed 12T cell will be changed: both transistors P8 and N4 will be ON, and thus node Q will be pulled down to state 0. This case is similar to a write 0 operation. Therefore, when node S0, S1, or QN, or node pair S0–S1, in the proposed RHBD 12T cell is upset by a radiation particle, the stored data can be recovered from the corrupted data. When node pair S0–QN or S1–QN is upset, the stored data cannot be recovered. However, when the spacing between node QN and node pair S0–S1 is large enough, the possibility of these multiple-node upset cases can be minimized. Fig. 3 shows the layout of the proposed 12T memory cell, in which the transistor spacings of both node pairs S1–QN and S0–QN are greater than the effective range of charge sharing (about 1.5 μm [25], [26]). Therefore, in this paper, we focus only on the case when node pair S0–S1 is changed by a radiation particle.
WRITE OPERATION:
Operation    WL   BL   BLN   Q   QN
Write '0'     0    0    1    0    1
Write '1'     1    1    0    1    0
READ OPERATION:
Operation    WL   BL   BLN   Q   QN
Read          0    1    0    1    0
CHAPTER-5 TOOLS
DSCH and MICROWIND:
Double click on the DSCH application; it will open a work window where we have to draw the schematic.
Then go to the File menu and click Select Foundry; it will open a dialog box, in which you select the location where the rule files exist.
Select the required rule file, then click on the symbol library as shown in the diagram, so that you get the list of symbols.
Select the set of symbols used in the schematic which you want to design, then drag the symbols from the library into the work window with a left click.
After completing the circuit we have to simulate it to check its working condition.
For that, click on the Run option as shown below.
After running for a particular period of time, click on the waveform; it will display the input and output response of the design.
Then it will display a dialogue box with Verilog code; click OK.
Here we do not get power analysis and delay; to accomplish this we have to use the MICROWIND tool. Double click on the application; then we will get a window as shown.
Then click on the File menu; we get a list of options; click on the Select Foundry option.
It will open a window as shown in the diagram. Select the location where we have the rule files, select the required rule file from it, then click Open.
After selecting the particular rule we have to check the power requirements, which is done by creating a layout for the schematic. Previously we created the Verilog file. That file has to be compiled here to create the schematic; for that, click on the Compile menu and select Compile Verilog File.
Then it creates a dialog box as shown here; there we have to select the location and file name, then click Open.
Then again we get a dialog box named Verilog File, which has two options: Compile and Back to Editor. Click on Compile so that the Verilog file is compiled and gives us the description of the pin diagram.
Then we have to check the functionality of the device; for that we have to simulate it. Go to the Simulate menu and select Run Simulation. There we have a set of options:
voltage vs. time, voltage vs. current, static voltage vs. voltage, and frequency vs. time. Select the required one.
Then we will get the waveforms as shown below, with the required power. We can also select the type of the simulation, which may be BSIM, LEVEL1, or LEVEL4; according to the selection criteria we will get the necessary waveforms as shown below.
CHAPTER-6 LANGUAGES
6.1 VHDL & VERILOG:
Both VHDL and Verilog are well established hardware description languages. They have the advantage that the user can define high-level algorithms and low-level optimizations (gate-level and switch-level) in the same language. A basic example of VHDL code, the evaluation of the Fibonacci series, is shown below, and it is a good example of the points made above. The code itself is reasonably straightforward for a software programmer to understand, provided that they understand that this is a truly parallel language and all lines are executing "at once". It is also straightforward to simulate a simple design of this nature. However, it is surprisingly difficult to implement it in hardware, and this difficulty is a direct result of I/O issues. As noted above, for a design to work in hardware, access is required to resources that are external to the FPGA, such as memory, and an FPGA is, by its very nature, unaware of the components to which it is connected. If you want to retrieve a value from main memory and use it on the FPGA then you need to instantiate a memory controller. While systems such as the Cray XD1 provide cores for communicating with memory, such cores are still complex and unfamiliar to software programmers. Our early experiences with VHDL have indicated that it should only be used for FPGA development if you are in a position to work closely with experienced hardware designers throughout the development process.
Verilog HDL is one of the two most common Hardware Description Languages (HDL) used by integrated circuit (IC) designers. The other one is VHDL. HDL’s allows the design to be simulated earlier in the design cycle in order to correct errors or experiment with different architectures. Designs described in HDL are technology-independent, easy to design and debug, and are usually more readable than schematics, particularly for large circuits. Verilog can be used to describe designs at four levels of abstraction: Algorithmic level (much like c code with if, case and loop statements). Register transfer level (RTL uses registers connected by Boolean equations). Gate level (interconnected AND, NOR etc.), Switch level (the switches are MOS transistors inside gates).
The language also defines constructs that can be used to control the input and output of simulation. More recently, Verilog is used as an input for synthesis programs which generate a gate-level description (a netlist) for the circuit. Some Verilog constructs are not synthesizable. Also, the way the code is written will greatly affect the size and speed of the synthesized circuit. Most readers will want to synthesize their circuits, so non-synthesizable constructs should be used only for test benches. These are program modules used to generate the I/O needed to simulate the rest of the design. The words "not synthesizable" will be used, as needed, for examples and constructs that do not synthesize.
Verilog-2001 (released as IEEE 1364-2001) adds many significant enhancements to the Verilog language, which add greater support for configurable IP modeling, deep-submicron accuracy, and design management. Other enhancements make Verilog easier to use. These changes will affect everyone who uses the Verilog language, as well as those who implement Verilog software tools. This paper will review and highlight the main features added to the Verilog standard for the IEEE 1364-2001 update. The focus will be on new simulation and synthesis constructs. Where possible, status regarding Synopsys support for the new features will also be noted.
The Verilog Hardware Description Language was first introduced in 1984, as a proprietary language from Gateway Design Automation. The original Verilog language was designed to be used with a single product, the Gateway Verilog-XL digital logic simulator. In 1989, Gateway Design Automation was acquired by Cadence Design Systems. In 1990, Cadence released the Verilog Hardware Description Language and the Verilog Programming Language Interface (PLI) to the public domain. Open Verilog International (OVI) was formed to control the public domain Verilog, and to promote its usage. Cadence turned over to OVI the Frame Maker source files containing most, but not all, of the Cadence Verilog-XL user’s manual. This document became OVI’s Verilog 1.0 Reference Manual. In 1993, OVI released its Verilog 2.0 Reference Manual, which contained a few enhancements to the Verilog language, such as array of instances, and major enhancements to the Verilog PLI. OVI then submitted a request to the IEEE to formally standardize Verilog 2.0. The IEEE formed a standard working group to create the standard, and, in 1995, IEEE 1364-1995 became the official Verilog standard. It is important to note
that for Verilog-1995, the IEEE standards working group did not consider any enhancements to the Verilog language.
The goal was to standardize the Verilog language the way it was being used at that time. The IEEE working group also decided not to create an entirely new document for the IEEE 1364 standard. Instead, the OVI Frame Maker files were used to create the IEEE standard. Since the origin of the OVI manual was Gateway’s Verilog-XL user’s manual, the IEEE 1364-1995 and IEEE 1364-2001 Verilog language reference manuals are still organized somewhat like a user’s guide.
Goals for Verilog standard Work on the IEEE 1364-2001 Verilog standard began in January 1997. Three major goals were established:
Enhance the Verilog language to help with today’s deep-submicron and intellectual property modeling issues.
Ensure that all enhancements were both useful and practical, and that simulator and synthesis
Vendors would implement Verilog-2000 in their products. Correct any errata or ambiguities in the IEEE 1364-1995 Verilog Language Reference Manual.
Many enhancements improve the ease and accuracy of writing synthesizable RTL models. Other enhancements allow models to be more scalable and re-usable. With the exception of the following paragraph, only changes which add new functionality or syntax are listed here. Verilog-2001 also contains many clarifications to Verilog-1995 which do not add new functionality. Notes are added to the sub-sections indicating Synopsys support with Presto and VCS at the time this paper was completed. Since the inception of Verilog in 1984, the term "register" has been used to describe the group of variable data types in the Verilog language. "Register" is not a keyword; it is simply a name for a class of data types, namely: reg, integer, time, real, and realtime. The use of the term "register" is often a source of confusion for new users of Verilog, who sometimes assume that the term implies a hardware register (flip-flops).
Verilog:
Hardware description languages such as Verilog differ from software programming languages because they include ways of describing the propagation of time and signal dependencies (sensitivity). There are two assignment operators, a blocking assignment (=), and a non-blocking (<=) assignment. The non- blocking assignment allows designers to describe a state-machine update without needing to declare and use temporary storage variables. Since these concepts are part of Verilog language semantics, designers could quickly write descriptions of large circuits in a relatively compact and concise form. At the time of Verilog introduction (1984), Verilog represented a tremendous productivity improvement for circuit designers who were already using graphical schematic capture software and specially written software programs to document and simulate electronic circuits.
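The scheduling difference between the blocking (=) and non-blocking (<=) operators can be imitated in Python (an illustrative sketch of the semantics; a real Verilog event scheduler is more involved). The classic example is swapping two registers on a clock edge:

```python
# Blocking style (=): each assignment takes effect immediately,
# so the second statement reads the already-updated value.
a, b = 1, 0
a = b          # a becomes 0
b = a          # b reads the new a: still 0 -- the swap fails
blocking_result = (a, b)
print(blocking_result)  # (0, 0)

# Non-blocking style (<=): all right-hand sides are sampled first,
# then every update is applied together, so no temporary is needed.
a, b = 1, 0
next_a, next_b = b, a   # sample phase (RHS evaluation at the clock edge)
a, b = next_a, next_b   # update phase (the non-blocking commit)
print((a, b))           # (0, 1) -- a correct swap
```

This is exactly why non-blocking assignment lets designers describe a state-machine update without declaring temporary storage variables: the sample and update phases are separated by the language itself.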
The designers of Verilog wanted a language with syntax similar to the C programming language, which was already widely used in engineering software development. Like C, Verilog is case-sensitive and has a basic preprocessor (though less sophisticated than that of ANSI C/C++). Its control flow keywords (if/else, for, while, case, etc.) are equivalent, and its operator precedence is compatible. Syntactic differences include variable declaration (Verilog requires bit-widths on net/reg types), demarcation of procedural blocks (begin/end instead of curly braces {}), and many other minor differences.
A Verilog design consists of a hierarchy of modules. Modules encapsulate design hierarchy and communicate with other modules through a set of declared input, output, and bidirectional ports. Internally, a module can contain any combination of the following: net/variable declarations (wire, reg, integer, etc.), concurrent and sequential statement blocks, and instances of other modules (sub- hierarchies). Sequential statements are placed inside a begin/end block and executed in sequential order within the block. However, the blocks themselves are executed concurrently, making Verilog a dataflow language.
The Verilog concept of a 'wire' comprises both a signal value (4-state: 1, 0, floating, undefined) and a drive strength (strong, weak, etc.). This system allows abstract modeling of shared signal lines, where multiple sources drive a common net. When a wire has multiple drivers, the wire's (readable) value is resolved by a function of the source drivers and their strengths.
Subsets of statements in the Verilog language are synthesizable. Verilog modules that conform to a synthesizable coding style, known as RTL (register-transfer level), can be physically realized by synthesis software. Synthesis software algorithmically transforms the (abstract) Verilog source into a netlist, a logically equivalent description consisting only of elementary logic primitives (AND, OR, NOT, flip-flops, etc.) that are available in a specific FPGA or VLSI technology. Further manipulations of the netlist ultimately lead to a circuit fabrication blueprint (such as a photomask set for an ASIC or a bitstream file for an FPGA).
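As an illustration of the RTL coding style (the register and port names are assumptions for this sketch), the following module can be mapped by synthesis software to flip-flops plus an adder and a multiplexer:

```verilog
// Synthesizable RTL sketch: a 4-bit loadable counter. Synthesis
// infers four flip-flops from the clocked always block, an adder
// for the increment, and multiplexers for the reset/load choices.
module counter (
    input  wire       clk,
    input  wire       rst_n,   // active-low synchronous reset
    input  wire       load,
    input  wire [3:0] din,
    output reg  [3:0] count
);
    always @(posedge clk) begin
        if (!rst_n)
            count <= 4'd0;
        else if (load)
            count <= din;
        else
            count <= count + 4'd1;
    end
endmodule
```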
Definition of constants:
The definition of constants in Verilog supports a width parameter. The basic syntax is:
<width in bits>'<base letter><number>
Examples:
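The elided examples presumably followed this pattern; some standard sized literals are:

```verilog
// Standard Verilog sized constants: <width>'<base><value>
wire [11:0] a = 12'h7AF;      // 12-bit hexadecimal constant
wire [7:0]  b = 8'b0110_0101; // 8-bit binary; underscores aid readability
wire [3:0]  c = 4'd9;         // 4-bit decimal
wire [5:0]  d = 6'o71;        // 6-bit octal
```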
There are several statements in Verilog that have no analog in real hardware, e.g.
$display. Consequently, much of the language cannot be used to describe hardware. The examples presented here are the classic subset of the language that has a direct mapping to real gates.
There are two separate ways of declaring a Verilog process: the always block and the initial block. Both constructs begin execution at simulator time 0, and both execute until the end of the block. Once an always block has reached its end, it is rescheduled (again). It is a common misconception to believe that an initial block will execute before an always block. In fact, it is better to think of the initial block as a special case of the always block, one which terminates after it completes for the first time.
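The contrast can be sketched as follows (a simulation-only fragment; the module name is illustrative):

```verilog
// Sketch contrasting the two process types (simulation-only code).
module processes;
    reg clk;

    initial begin          // starts at time 0, runs once, terminates
        clk = 1'b0;
        #100 $finish;      // end the simulation after 100 time units
    end

    always #5 clk = ~clk;  // re-executes forever: toggles clk every
                           // 5 time units, producing a clock signal
endmodule
```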
CHAPTER-7 DESIGN RULE CHECKER
7.1 Introduction to DRC:
Design rules are a series of parameters provided by semiconductor manufacturers that enable the designer to verify the correctness of a mask set. Design rules are specific to a semiconductor manufacturing process. A design rule set specifies certain geometric and connectivity restrictions to ensure sufficient margins to account for variability in semiconductor manufacturing processes, so that most of the parts work correctly.
The most basic design rules are single-layer rules. A width rule specifies the minimum width of any shape in the design. A spacing rule specifies the minimum distance between two adjacent objects. These rules exist for each layer of the semiconductor manufacturing process, with the lowest layers having the smallest rules (typically 100 nm as of 2007) and the highest metal layers having larger rules (perhaps 400 nm as of 2007).
A two-layer rule specifies a relationship that must exist between two layers. For example, an enclosure rule might specify that an object of one type, such as a contact or via, must be covered, with some additional margin, by a metal layer. A typical value as of 2007 might be about 10 nm.
There are many other rule types. A minimum area rule specifies the smallest allowed area of any shape on a layer. Antenna rules are complex rules that check ratios of areas of every layer of a net for configurations that can result in problems when intermediate layers are etched. Many other such rules exist and are explained in detail in the documentation provided by the semiconductor manufacturer.
Academic design rules are often specified in terms of a scalable parameter, λ, so that all geometric tolerances in a design may be defined as integer multiples of λ. This simplifies the migration of existing chip layouts to newer processes. Industrial rules are more highly optimized, and only approximate uniform scaling. Design rule sets have become increasingly more complex with each subsequent generation of semiconductor process.
The main objective of design rule checking (DRC) is to achieve a high overall yield and reliability for the design. If design rules are violated the design may not be functional. To meet this goal of improving die yields, DRC has evolved from simple measurement and Boolean checks, to more involved rules that modify existing features, insert new features, and check the entire design for process limitations such as layer density. A completed layout consists not only of the geometric representation of the design, but also data that provides support for the manufacture of the design.
While design rule checks do not validate that the design will operate correctly, they are constructed to verify that the structure meets the process constraints for a given design type and process technology.
DRC software usually takes as input a layout in the GDSII standard format and a list of rules specific to the semiconductor process chosen for fabrication. From these it produces a report of design rule violations that the designer may or may not choose to correct. Carefully "stretching" or waiving certain design rules is often used to increase performance and component density at the expense of yield.
DRC products define rules in a language that describes the operations to be performed during DRC. For example, Mentor Graphics uses the Standard Verification Rule Format (SVRF) language in its DRC rule files, and Magma Design Automation uses a Tcl-based language. A set of rules for a particular process is referred to as a run-set, rule deck, or just a deck.
DRC is a very computationally intensive task. Usually DRC checks are run on each sub-section of the ASIC to minimize the number of errors detected at the top level. If run on a single CPU, customers may wait up to a week for the result of a design rule check on a modern design. Most design companies require DRC to run in less than a day to achieve reasonable cycle times, since DRC will likely be run several times prior to design completion. With today's processing power, full-chip DRC runs can complete in much shorter times, as little as one hour, depending on chip complexity and size.
Some examples of DRCs in IC design include the minimum width, minimum spacing, enclosure, minimum area, and antenna checks described above.
CHAPTER-8 ADVANTAGES AND APPLICATIONS
ADVANTAGES:
Low Power Consumption.
Advanced technology.
DISADVANTAGES:
APPLICATIONS:
Wherever low-power operation is required, deep-submicron (28 nm) technology can be used.
In sequential circuits the delay is greater; it can be reduced by using these flip-flops.
SNAPSHOTS:
LAYOUT:
SIMULATION RESULT:
FUTURE SCOPE
CONCLUSION
In this paper, an exhaustive analysis and design methodology for commonly used high-speed flip-flop topologies in 28nm CMOS technology has been carried out. The comparison has been performed with respect to area, delay, and power dissipation. The impact of layout parasitics has been included in the transistor-level design phase. According to the presented results, the fastest topologies in terms of delay are C2CMOS and DET; with respect to area and transistor count, TSPC and C2CMOS are better; and with respect to power dissipation, SET shows the best result, making it the best low-power flip-flop. Moreover, the topologies with the best behavior under clock skew and the least propagation delay are DET and C2CMOS.
We conclude that the efficient design architectures for portable applications, based on power dissipation, propagation delay, and transistor count, are the TSPC, SET, DET, and C2CMOS flip-flops. Considering the suitability of flip-flops and selecting the best topology for a given application is an important issue; the low-power SET design is suitable for portable applications.
The above performance comparison shows that the C2CMOS and TSPC flip-flop architectures give better results on the given key parameters compared with SET and DET. This means that both architectures are suitable for low-power, fast-switching, and minimum-area applications.
REFERENCES:
Design of a Low Power Flip-Flop Using CMOS Deep Submicron Technology.
Modified SET D-Flip Flop Design for Low-Power VLSI Applications.
Double Edge Triggered Feedback Flip-Flop in Sub-100 nm Technology.
Design and Analysis of High-Performance Double Edge Triggered D-Flip Flop, International Journal of Recent Technology and Engineering (IJRTE), 2013.
Design Approaches for Low Power Low Area D Flip-Flops in Nano Technology, IEEE Transactions on Very Large Scale Integration (VLSI) Systems.
WEBSITES: