Published on January 2017
How does a car work?

When you're riding in a car, you may not necessarily think about what's making it go, aside from the grown-up in the driver's seat. But in truth, cars get their power and their ability to move from their very special construction and from energy-filled substances that fuel them. If you've ever been to the gas station, then you probably already know that cars need gasoline, refined from petroleum, to run. The car's engine runs on a mixture of gasoline and air. Just like a hot fire gives off energy when it burns, so does the gasoline! The spark that gets the whole thing going comes from the car's battery, which is activated when the driver turns the key in the ignition. Once the engine is running, the energy the car gets from the gas it burns moves the car forward! A special part of the car called the "transmission" passes the energy from the engine to the wheels. The driver then uses the steering wheel to guide the car in the direction it needs to go, and uses the car's brakes to slow the car down at a stoplight or when arriving at the destination!

The Transmission

The transmission controls the power coming from the crankshaft before it goes to the wheels, and allows a driver to control the speed and power of a car by providing different speed/power ratios known as gears. So first gear gives plenty of power but little speed, whereas fifth gear provides little power but plenty of speed. The crankshaft connects to the transmission only when the car is in gear and the clutch is engaged; if you press down on the clutch, the crankshaft disconnects from the transmission. The transmission is connected to the output shaft, which is connected to the axles, which are connected to the wheels. When the transmission rotates the output shaft, the output shaft turns the axles, which in turn rotate the wheels.

How a Motor Engine Works

The more you understand about your car, the better equipped you will be to keep it running the correct way and to know where the trouble spots could be if it is not running as efficiently as you would like. If you want to ensure that your car keeps going the right way, one of the first things you want to understand is the motor and how it connects to everything in your vehicle. This will allow you to make the adjustments needed and to have better knowledge of the vehicle that you have.

The motor of a car is like the heart of your vehicle. It is the one thing that ties everything together and ensures that your car continues to move. Its main purpose is to draw the gasoline-and-air mixture into the engine, burn it, and push the spent gases back out, so that your vehicle has fuel to run on. The motor propels the gasoline into the various areas of the car, converting it into the motion that keeps everything running effectively. Even further than this, everything that is connected to the engine also allows it to keep running at its best. The first of these is the set of spark plugs mounted in the engine. As a piston moves up, it compresses the fuel mixture toward the spark plug; the spark plug then ignites the mixture, driving the piston back down. The burned gasoline is then pushed up through the cylinder of the engine and released into an exhaust pipe. This is the cycle that the motor follows in order to release the energy to move. To keep the motor fully functioning, there are other components that also allow the engine to continue to run smoothly. The first of these is the connecting rod, which links the piston to the crankshaft and supports it as it moves and releases the gas. The second is the crankshaft, which converts the up-and-down motion of the pistons into rotation so that fuel can be drawn in and released at the right time. The last of these is the sump, which collects the oil that lubricates the engine's pistons and other parts of the motor as they are turning, allowing for a smoother ride.

Take a close look at the internal combustion cycle:

1. Intake stroke – the intake valve opens and the piston moves down, allowing the fuel-air mix to enter the open space.
2. Compression stroke – the piston moves upwards. This compresses the fuel-air mix by forcing it into a smaller space. Compression makes the fuel-air mix explode with greater force.
3. Power stroke – a spark from the spark plug ignites the fuel-air mix. The explosion forces the piston down the cylinder.
4. Exhaust stroke – the exhaust valve opens and the piston moves back to the top of the cylinder, which forces the exhaust fumes out.

As these all move together, the motor keeps turning and the car moves forward on the fuel it is given. Through the continuous turns and movements of the motor, your car is pushed forward in a few simple steps that keep the wheels turning. The bottom of each piston is attached to the crankshaft. As the pistons are forced up and down they rotate the crankshaft, which, after sending the power through the transmission, turns the wheels. Most cars have at least four cylinders; more powerful cars have more. For example, a V6 has six cylinders and a V8 has eight. The harder a driver presses on the accelerator pedal, the more fuel-air mix is passed into the cylinders and the more power is produced.

What Are Revolutions Per Minute? The four-stroke cycle repeats itself thousands of times a minute. These repetitions are more commonly known as revs. A rev counter tells you how many thousand times per minute the cycle is repeated.

Two Stroke & Four Stroke Engines

A stroke refers to one movement of the piston in the engine. In a 2-stroke engine the piston moves twice per cycle; in a 4-stroke engine it moves four times. Each movement of the piston (stroke) is characterized by a unique activity, such as compressing fuel or generating power. So what exactly is the difference between 2-stroke and 4-stroke engines? Let us find out.

In a two-stroke engine the piston moves only twice per cycle. The first movement is called the compression stroke and the second the power stroke:

1. Compression stroke: the act of compressing fuel. During the compression stroke the piston goes up, compressing the fuel in the cylinder.
2. Power stroke: the compression stroke is followed by the power stroke. During the power stroke the fuel is ignited, which pushes the piston down, producing a lot of power and torque. This stroke also draws in fresh fuel and air, ready to start compression again.

In a four-stroke engine, the piston moves four times per cycle, i.e. two sets of ups and downs. The movements are as follows:

1. Compression stroke: the fuel is compressed. The compression is essential to generate power at the later stages.
2. Power stroke: the fuel ignites and moves the piston down. The downward movement produces power and torque.
3. Exhaust stroke: the piston goes up again and drives the spent gases out of the exhaust valve.
4. Intake stroke: the piston goes down again and draws in a fresh charge of fuel and air, ready to compress again.

Thus, as you can see, a two-stroke engine involves only two piston movements per cycle and a four-stroke engine four.

Basic Features:

- Because a 2-stroke engine fires twice as often as a 4-stroke engine, it generates more power and torque. 2-stroke engines are also noisier than 4-stroke engines.
- A 2-stroke engine does all the work of exhausting and taking in fuel in a single stroke (the power stroke), so it is more polluting.
- 2-stroke engines need more lubrication than 4-stroke engines. One has to keep the engine lubricated frequently (oiling) for a smooth riding experience.
- 2-stroke engines are not suitable for long-term use, as they tend to produce more noise and pollution.
- 4-stroke engines are fuel efficient, give a smoother riding experience, pollute less, and are the least noisy.

- 4-stroke engines do not emit as much smoke as 2-stroke ones do. They also have a longer life.

The Final Verdict: Though a two-stroke engine delivers more power and torque, it is not suited to day-to-day use. Two-stroke engines are not fuel efficient, have a short life, pollute more, and are noisier than 4-stroke ones. Therefore, 4-stroke engines should be preferred, as they are more fuel efficient, less polluting, and affordable. 4-stroke bikes are ideal for day-to-day activities.

Anti-lock braking system (ABS)

An anti-lock braking system is an automotive safety system that allows the wheels on a motor vehicle to maintain tractive contact with the road surface according to driver inputs while braking, preventing the wheels from locking up (ceasing rotation) and avoiding uncontrolled skidding. It is an automated system that applies the principles of threshold braking and cadence braking, which were practiced by skillful drivers with previous-generation braking systems, but it does so at a much faster rate and with better control than a driver could manage.

The theory behind anti-lock brakes is simple. A skidding wheel (where the tire contact patch is sliding relative to the road) has less traction than a non-skidding wheel. If you have ever been stuck on ice, you know that if your wheels are spinning you have no traction, because the contact patch is sliding relative to the ice. By keeping the wheels from skidding while you slow down, anti-lock brakes benefit you in two ways: you'll stop faster, and you'll be able to steer while you stop. There are four main components to an ABS system:

- Speed sensors
- Pump
- Valves
- Controller

Speed Sensors
The anti-lock braking system needs some way of knowing when a wheel is about to lock up. The speed sensors, which are located at each wheel, or in some cases in the differential, provide this information.

Valves
There is a valve in the brake line of each brake controlled by the ABS. On some systems, the valve has three positions:

- In position one, the valve is open; pressure from the master cylinder is passed right through to the brake.
- In position two, the valve blocks the line, isolating that brake from the master cylinder. This prevents the pressure from rising further should the driver push the brake pedal harder.
- In position three, the valve releases some of the pressure from the brake.

Pump
Since the valve is able to release pressure from the brakes, there has to be some way to put that pressure back. That is what the pump does; when a valve reduces the pressure in a line, the pump is there to get the pressure back up.

Controller
The controller is a computer in the car. It watches the speed sensors and controls the valves.

ABS at Work
There are many different variations and control algorithms for ABS systems. We will discuss how one of the simpler systems works.

The controller monitors the speed sensors at all times. It is looking for decelerations in the wheel that are out of the ordinary. Right before a wheel locks up, it will experience a rapid deceleration. If left unchecked, the wheel would stop much more quickly than any car could. It might take a car five seconds to stop from 60 mph (96.6 kph) under ideal conditions, but a wheel that locks up could stop spinning in less than a second. The ABS controller knows that such a rapid deceleration is impossible, so it reduces the pressure to that brake until it sees an acceleration, then it increases the pressure until it sees the deceleration again. It can do this very quickly, before the tire can actually significantly change speed. The result is that the tire slows down at the same rate as the car, with the brakes keeping the tires very near the point at which they will start to lock up. This gives the system maximum braking power. When the ABS system is in operation you will feel a pulsing in the brake pedal; this comes from the rapid opening and closing of the valves. Some ABS systems can cycle up to 15 times per second.

By sudhakarmaradana, Jun 12, 2012
CAN Calibration Protocol (CCP)
What is Calibration?

Calibration is the process of optimizing or tuning a control algorithm to get the desired response from the system. A calibration tool is a combination of a hardware interface and a software application that enables the engineer to access the calibration variables in an ECU and change them. Typical control algorithm components that need calibration are look-up tables, gains, and constants. A powertrain control algorithm may have hundreds of calibratable parameters; the more parameters there are, the more difficult the task of finding an optimal calibration. The calibration tool helps the engineer arrive at an acceptable calibration parameter set. All of the calibratable parameters are grouped into a special section of ECU memory called the calibration memory, and calibration tools give the user access to this memory to allow the parameters to be changed.

A basic calibration system consists of an ECU interface, a link to the host PC, and a PC application. A more capable system will add a vehicle network link as well as analog data acquisition modules. The ECU interface is typically a CAN interface when a CAN-based calibration method is used, or a Read-Only Memory (ROM) emulator when a direct memory access method is used. The link back to the host PC can be CAN, USB, Ethernet, or another method. The PC application is typically an MS-Windows® application. Figure 1 shows a typical calibration system.

What is CCP?

The CCP (CAN Calibration Protocol) is, just as the name indicates, a protocol for calibration of, and data acquisition from, electronic control units (ECUs). The protocol is defined by ASAM (Association for Standardisation of Automation and Measuring Systems), earlier known as ASAP (Arbeitskreis zur Standardisierung von Applikationssystemen). This is an international organization consisting of a number of significant vehicle manufacturers, e.g. Audi, BMW, and VW. Until now, different technical solutions have been used for the development, calibration, production, and service of ECU hardware and software. The aim of CCP is to create a common tool for all stages of ECU development that is compatible with different kinds of hardware and software.

The ASAM group defines a lot of standards. The CCP and XCP standards are found in subsection AE (Automotive Electronics) and are grouped into something called AE MCD 1. The current version of the CCP specification is 2.1, released in February 1999.

The CAN Calibration Protocol is basically used as a monitor program. Similar to many earlier serial RS232-type monitors and bootstrap loaders that provide basic read and write memory capabilities, CCP provides the same functionality using a standard protocol rather than a company-specific proprietary protocol. However, when a high-speed CAN bus is used, CCP, unlike some previous 9600-baud UART-based monitors, provides the ability to access data at such a fast rate that it is possible to run an application at the same time. Developers now have a significant advantage over the earlier monitoring methods.

In the dialog used by CCP and most monitor programs, it is the tool or PC that is the master and sends commands to the ECU. The ECU does not send information without the master (tool) initiating commands. A CCP-compliant tool can read data from the ECU and can write data into the ECU with the appropriate CCP messages.

With CCP, the software developer can read:

- RAM
- PORTS
- ROM
- FLASH

With CCP, the software developer can write to:

- RAM
- PORTS
- FLASH

However, this is only the CAN Calibration Protocol's minimum capability. CCP includes several additional monitor commands, and provides several new features including automatic data acquisition processing based on events or periodic updating, flash programming, and data security. Since there is no requirement to use all its features, CCP is a scalable protocol.

CCP users have access to online measurement data and can calibrate modules, so software development can occur not only in a lab environment but also during an in-vehicle test.

What are the functions of CCP?

CCP is an application layer for CAN 2.0B (11- or 29-bit CAN identifiers). The protocol is a top layer (layer 7) according to the OSI model, which means that it does not describe how bits and bytes are created but uses the CAN 2.0B protocol's physical, data link, and network layers. CCP supports the following functions:

- Reads from and writes to ECU memory.
- Synchronous cyclic data acquisition from an ECU.
- Simultaneous calibration and data acquisition.
- Handles multiple nodes on the CAN bus.
- Flash programming.
- Plug and play.
- Protection of resources (data acquisition and calibration).

CCP in detail

CCP is built on a master/slave architecture where the CCP master starts communication by sending commands to a slave node. There can be several slave nodes connected to a CAN bus. CCP uses generic commands for data acquisition and simple memory handling for calibration. These two resources are independent and can therefore be used simultaneously.

Figure 1: CCP bus connection.

CCP has been designed to handle the restrictions and demands of both small 8-bit microcontrollers and high-performance ECUs. No extra hardware has to be connected to the ECU; the CCP driver is implemented entirely in software. A simple implementation of CCP needs only a small part of the available RAM, ROM, and CPU time for execution, and only two CAN message identifiers, which can be given a low priority that does not disturb the ordinary traffic. If CCP is to be used from an ordinary PC, the same simple and low-cost CAN interface used with microcontrollers can be used.

Generic commands

CCP uses generic commands, which are not node specific, to perform different functions in a slave node. As the commands are generic, every node must have an individual station address. A logical connection between master and slave has to be set up before any commands can be sent. This connection persists until the master decides to connect to another slave node or until the master sends a disconnect command. After the connection, the master controls all communication between master and slave. Every message from the master is followed by a reply message from the slave containing data or error codes.

CCP-specific CAN messages

CCP is built on the CAN 2.0B protocol. All messages are 8 bytes long. Only two types of CAN messages are needed, the CRO and the DTO, one for each direction. CRO (Command Receive Object) messages are sent from master to slave and contain control commands. DTO (Data Transmission Object) messages are sent from slave to master. When a slave has received a CRO message it performs the given instructions and then answers with a DTO message containing a CRM (Command Return Message). The CRM code tells the master whether the corresponding control command has been performed as planned or not.

Figure 2: CRO and DTO messages

The CAN identifiers used for the CRO and DTO messages are determined by a configuration file (an "A2L file", defined by the ASAM MCD 2MC/ASAP2 standard) which is used to configure the master. The configuration file may also contain information about the slave's memory organization, which is useful for data acquisition and calibration. Since the CAN identifier defines the priority of the message, it should be chosen in a way that does not disturb the ordinary traffic on the bus. CCP does not determine which byte order (Motorola or Intel) to use in general data transmission. There is one exception: the byte order for the station address used when establishing a connection between master and slave has to be Intel (LSB first).

Description of CRO messages

CRO messages are sent from master to slave and contain instructions. The first byte is a command code (CMD) which describes the purpose of the message. The second byte is a command counter (CTR) used for keeping track of the communication; the command counter is also expected to be sent back in the DTO reply from the slave. Bytes 2-7 are reserved for data parameters, depending on the command code. A message is always 8 bytes long, and bytes which are not defined are considered "don't care".

Figure 3: Organization of CRO message.

Description of DTO messages

The DTO message is sent by the slave as a receipt for a received CRO message, and it is also used for data acquisition. The first byte in the message is called the PID (Packet ID). The value of the PID describes the message type. There are three types of messages:

- 0xFF, command return message (CRM), if the DTO is sent as a receipt for a CRO message.
- 0xFE, event message, if the DTO reports internal slave status changes in order to invoke error recovery or other services.
- 0-0xFD, data acquisition message (DAQ). This PID holds the number of an ODT (Object Descriptor Table), which is described later.

Figure 4: Organization of DTO message

Data acquisition (DAQ)

The master device can initiate and start data acquisition from the slave device. Data is sent from the slave in special DAQ-DTOs. The data bytes are organized in DAQ lists, each of which consists of a number of ODT lists. An ODT list contains up to 7 pointers to memory addresses in the ECU where data is stored. Besides pointers to memory addresses, an ODT list can contain an address extension and the number of bytes to be sent. Not all slave devices handle data elements longer than one byte; in that case it is up to the master to solve the problem by splitting the data into single bytes.

Figure 5: Organization of ODT list

A DAQ-DTO consists of a PID and the data elements that the memory pointers in the ODT list point at. The PID number (usually the same as the ODT list number) has a value between 0 and 253, which means that at most 254 ODT lists can exist simultaneously.

Figure 6: Organization of DAQ lists

The CCP specification allows several DAQ lists to be active simultaneously. Transmission of a DAQ list is initiated by the master with a START_STOP command. Data bytes in the ODT lists are sampled in the slave device and then sent on the CAN bus in DAQ-DTOs. If a new START_STOP command is received by the slave before the ongoing DAQ cycle is finished, there are two ways to react: the new DAQ command is started and the ongoing one terminated, or the ongoing cycle is finished and the new one ignored. There are advantages and disadvantages to both methods, and the CCP specification does not say which one to choose.

Commands

The commands included in the CCP specification are listed in Figure 7. Not all commands have to be implemented: commands not needed for calibration are marked as optional in the table (Figure 7). Likewise, GET_DAQ_SIZE, SET_DAQ_PTR, WRITE_DAQ, START_STOP (and START_STOP_ALL) are DAQ specific and do not have to be implemented unless this resource is used. GET_SEED and UNLOCK are used to unlock CCP resources such as data acquisition and calibration if they are protected by a key (password), which is optional.

Figure 7: Commands.

Error handling

Depending on the error code from the slave and how critical it is, the master takes different actions, which are described in Figure 8.

Figure 8: Error handling.

Error C0 is a warning and no action is taken. If error C1 occurs, there is an error in the communication or the node sending it is busy; the master should wait the ACK time described in Figure 7 and then try to resend the message. This should be done two times. Error C2 might be a temporary power loss and can be solved by a reinitialization. Error C3 is irresolvable, and the master should terminate the running session. "Cold start" means that a new logical connection between master and slave is established with a CONNECT command and some further initialization.

Example sequences

The examples describe the commands a master uses for basic CCP communication.

Login session (cold start)

A typical connection between master and slave starts with a CRO containing a CONNECT command from the master. The slave should answer with a corresponding DTO. To make sure that master and slave speak the same "language", a GET_CCP_VERSION command is sent from the master with the expected version number. If the version number sent in return from the slave matches, communication can proceed. For "plug 'n play" compatible nodes, the EXCHANGE_ID command can be used for automatic session configuration, depending on the station address. On the command GET_SEED, the slave node returns information about the protection status (locked/unlocked) of resources (DAQ or calibration). If, for some reason, a resource is locked, it has to be unlocked before it can be used. To unlock a resource, an UNLOCK command with a "key" derived from the GET_SEED DTO has to be sent from the master. Before the login session is ended, initialization of the status bits is recommended; this is done with the SET_S_STATUS command.

Calibration init session

This session description assumes that a login procedure has been performed. The first step is to set the session status bit for calibration to "off" with the SET_S_STATUS command. Thereafter, the memory address containing the data to exchange is selected with the SET_MTA command. To ensure that this memory address is available, a BUILD_CHKSUM command is sent, and an answer from the slave node confirming this is expected. Thereafter the data byte(s) can be downloaded to the selected address: a DOWNLOAD command is sent with the number of data bytes and the value of each byte. To actually perform the data exchange, SELECT_CAL_PAGE is sent. To indicate that calibration has started, the session status bit for calibration is set to "on" with the SET_S_STATUS command.

DAQ init session

This session description assumes that a login procedure has been performed. The session status bit for DAQ is set to "off" with SET_S_STATUS. A DAQ list is selected with GET_DAQ_SIZE, and the slave answers with the number of available ODTs. The SET_DAQ_PTR command selects which DAQ list, ODT table, and element in the ODT should be written to. The WRITE_DAQ command then assigns the memory address of the data parameter to the previously selected element. When all DAQ lists are filled as wanted, the session status bit for DAQ is set to "on" using SET_S_STATUS. The transmission of a DAQ list is started with the START_STOP command. If several DAQ lists should be started at the same time and sent synchronously, the START_STOP_ALL command is used.

Areas of use

The most common area of use is in the automotive industry, where CAN is often used, but CCP can be used in any other industry where CAN is used. Conceivable areas of use include:

- ECU development.
- Systems for function and environmental tests of ECUs.
- Test benches for combustion engines, gearboxes, or climate control.
- Measurement and calibration in pre-production vehicles.
- General CAN applications outside the vehicle industry.
By sudhakarmaradana, Jun 8, 2012

UDS – General vehicle diagnostics
UDS (Unified Diagnostic Services) is defined in the standards ISO 14229 (the bus-system-independent part) and ISO 15765-3 (which describes the CAN-specific implementation). Unlike OBD, the UDS standards for general vehicle diagnostics prescribe no CAN identifiers and no CAN baud rates; each vehicle manufacturer is free to decide these. The standards do, however, define the SIDs and PIDs (called sub-function parameters in UDS). Unlike with OBD, the content of the messages is not defined by UDS in practice: each vehicle manufacturer can specify how it defines data, which parameters it uses, how it codes them, and so on. The message structure of the UDS diagnostic services is consistent with the structure of OBD: the first byte is the SID, followed by a detail of the service, the so-called sub-function identifier (which essentially corresponds to the PID in OBD). UDS offers the possibility to suppress positive response messages: for this, the diagnostic tester must set the top bit of the sub-function identifier to 1 in the request. Negative responses, on the other hand, must always be sent. Suppressing positive response messages is useful, for example, to reduce the bus load during flashing.

There are a large number of UDS diagnostic services. The following two tables show an extract of them. For comparison, the tables also show the corresponding services of UDS's predecessor, KWP 2000.

Extract of the diagnostic services of UDS and KWP 2000

Example: Flash Programming I
As a practical example of the UDS diagnostic services, let us consider the typical structure of a flash programming sequence, as illustrated. The diagnostic tester first sends a ReadDataByIdentifier request. With this request it reads the hardware ID and software ID from the controller, to check that it is talking to exactly the right device. Then the diagnostic tester switches the control unit to a special diagnostic session: not yet the actual programming session, but an extended session in which a number of advanced services are available. This is done with the diagnostic service DiagnosticSessionControl. In this extended diagnostic session, the diagnostic tester asks the control unit whether the preconditions for flash programming are met; typically, programming may only be done when the vehicle is stationary, the engine is off, and so on.

Basic sequence of a flash programming

Then the diagnostic tester, usually with the service CommunicationControl, disables the fault memory and the normal bus communication in the other controllers. The extended diagnostic session has now served its purpose, and the diagnostic tester uses DiagnosticSessionControl to switch to the programming session. At the latest now, a SecurityAccess is necessary. Thereafter, the diagnostic tester usually sends the so-called fingerprint to the control unit. This is information that is stored permanently in the ECU memory to record the programming: typically a workshop identifier is written into the memory of the controller, so that it can be established afterwards who reprogrammed the ECU.

Before the flash memory can be reprogrammed, it must be erased. This is done by calling a routine in the ECU memory via the diagnostic service RoutineControl. Thereafter, the actual programming operation is initiated with the service RequestDownload. With this service the controller is also notified of where in memory the data is to be loaded and how much data is to be expected. Now the actual download of the data starts, in a loop with the service TransferData; the memory area is transmitted in blocks. At the end, the diagnostic tester tells the control unit with RequestTransferExit that all data has been transferred.

After the transmitted data has been checked in the control unit, the actual flash process takes place. Typically, the programming operation takes some time, during which the controller is not able to process requests from the tester. Therefore, the control unit usually answers the RequestTransferExit with a negative response and the error code ResponsePending; only when the programming is completed does the controller send a positive confirmation of RequestTransferExit. Then the diagnostic tester examines whether programming was successful: with RoutineControl it activates a routine in the control unit which checks the memory. Thereafter, depending on the flash programming, a further call to RoutineControl may examine, for example, whether further software or a corresponding data record must be programmed. Once the download process is completed, the controller is normally reset with ECUReset. The controller reboots and goes back to normal operation, i.e. back to the default diagnostic session. To restore the status quo for the other ECUs in the vehicle as well, CommunicationControl re-enables normal bus communication, and the fault memory in the other control devices is turned on again. The download process is then complete.

By sudhakarmaradana, 1 Jun 2012

Memory Map in C
A typical memory representation of a C program consists of the following sections:

1. Text segment
2. Initialized data segment
3. Uninitialized data segment
4. Stack
5. Heap

1. Text Segment: A text segment, also known as a code segment or simply as text, is one of the sections of a program in an object file or in memory which contains executable instructions. As a memory region, the text segment may be placed below the heap and stack in order to prevent heap or stack overflows from overwriting it.

Usually, the text segment is sharable, so that only a single copy needs to be in memory for frequently executed programs such as text editors, the C compiler, the shells, and so on. Also, the text segment is often read-only, to prevent a program from accidentally modifying its own instructions.

2. Initialized Data Segment: The initialized data segment, usually called simply the data segment, is a portion of the virtual address space of a program which contains the global variables and static variables that are initialized by the programmer. Note that the data segment is not read-only, since the values of the variables can be altered at run time. This segment can be further classified into an initialized read-only area and an initialized read-write area. For instance, the global string defined by char s[] = "hello world" in C, and a C statement like int debug = 1 outside main() (i.e. global), would be stored in the initialized read-write area. A global C statement like const char* string = "hello world" makes the string literal "hello world" be stored in the initialized read-only area, while the character pointer variable string itself lives in the initialized read-write area. Ex: static int i = 10 will be stored in the data segment, and a global int i = 10 will also be stored in the data segment.

3. Uninitialized Data Segment: The uninitialized data segment is often called the "bss" segment, named after an ancient assembler operator that stood for "Block Started by Symbol." Data in this segment is initialized by the kernel to arithmetic 0 before the program starts executing. The uninitialized data segment starts at the end of the data segment and contains all global variables and static variables that are initialized to zero or do not have an explicit initialization in the source code. For instance, a variable declared static int i; would be contained in the BSS segment, and a global variable declared int j; would also be contained in the BSS segment.

4.
Stack: The stack area traditionally adjoined the heap area and grew in the opposite direction; when the stack pointer met the heap pointer, free memory was exhausted. (With modern large address spaces and virtual memory techniques they may be placed almost anywhere, but they still typically grow in opposite directions.) The stack area contains the program stack, a LIFO structure, typically located in the higher parts of memory. On the standard x86 PC architecture it grows toward address zero; on some other architectures it grows in the opposite direction. A "stack pointer" register tracks the top of the stack; it is adjusted each time a value is "pushed" onto the stack. The set of values pushed for one function call is termed a "stack frame"; a stack frame consists at minimum of a return address.

The stack is where automatic variables are stored, along with information that is saved each time a function is called. Each time a function is called, the address of where to return to and certain information about the caller's environment, such as some of the machine registers, are saved on the stack. The newly called function then allocates room on the stack for its automatic and temporary variables. This is how recursive functions in C can work: each time a recursive function calls itself, a new stack frame is used, so one set of variables doesn't interfere with the variables from another instance of the function.

5. Heap: The heap is the segment where dynamic memory allocation usually takes place. The heap area begins at the end of the BSS segment and grows to larger addresses from there. The heap area is managed by malloc, realloc, and free, which may use the brk and sbrk system calls to adjust its size (note that the use of brk/sbrk and a single "heap area" is not required to fulfill the contract of malloc/realloc/free; they may also be

implemented using mmap to reserve potentially non-contiguous regions of virtual memory in the process's virtual address space). The heap area is shared by all shared libraries and dynamically loaded modules in a process.

Examples. The size(1) command reports the sizes (in bytes) of the text, data, and bss segments (for more details please refer to the man page of size(1)).

1. Check the following simple C program:

#include <stdio.h>

int main(void)
{
    return 0;
}

[narendra@CentOS]$ gcc memory-layout.c -o memory-layout
[narendra@CentOS]$ size memory-layout
   text    data     bss     dec     hex filename
    960     248       8    1216     4c0 memory-layout

2. Let us add one global variable to the program, and check the size of bss again:

#include <stdio.h>

int global; /* Uninitialized variable stored in bss */

int main(void)
{
    return 0;
}

[narendra@CentOS]$ gcc memory-layout.c -o memory-layout
[narendra@CentOS]$ size memory-layout
   text    data     bss     dec     hex filename

    960     248      12    1220     4c4 memory-layout

3. Let us add one static variable, which is also stored in bss:

#include <stdio.h>

int global; /* Uninitialized variable stored in bss */

int main(void)
{
    static int i; /* Uninitialized static variable stored in bss */
    return 0;
}

[narendra@CentOS]$ gcc memory-layout.c -o memory-layout
[narendra@CentOS]$ size memory-layout
   text    data     bss     dec     hex filename

    960     248      16    1224     4c8 memory-layout

4. Let us initialize the static variable, which will then be stored in the data segment (DS):

#include <stdio.h>

int global; /* Uninitialized variable stored in bss */

int main(void)
{
    static int i = 100; /* Initialized static variable stored in DS */
    return 0;
}

[narendra@CentOS]$ gcc memory-layout.c -o memory-layout

[narendra@CentOS]$ size memory-layout
   text    data     bss     dec     hex filename

    960     252      12    1224     4c8 memory-layout

5. Let us initialize the global variable, which will then be stored in the data segment (DS):

#include <stdio.h>

int global = 10; /* Initialized global variable stored in DS */

int main(void)
{
    static int i = 100; /* Initialized static variable stored in DS */
    return 0;
}

[narendra@CentOS]$ gcc memory-layout.c -o memory-layout
[narendra@CentOS]$ size memory-layout
   text    data     bss     dec     hex filename

    960     256       8    1224     4c8 memory-layout

1. Where are global, local, static and extern variables stored? Local variables are stored on the stack. Register variables are stored in registers. Initialized global and static variables are stored in the data segment (uninitialized ones in BSS). Dynamically created memory lives on the heap, the C program instructions are stored in the code (text) segment, and extern variables are also stored in the data segment.

2. What does the BSS segment store? The BSS segment stores uninitialized global and static variables and initializes them to zero. I read that the BSS segment doesn't consume memory; where then does it store these variables? You probably read that the BSS segment doesn't consume space in the executable file on disk. When the executable is loaded, the BSS segment certainly does consume space in memory: space is allocated and initialized to zero by the OS loader.

3. Global variable and local variable. Global variables, once declared, can be used anywhere in the program, i.e. even in many functions; if needed, you can expose global variables through user-defined header files, much like packages in Java. Global variable values can be changed programmatically. Local variables are local to a function and can't be used beyond that function.

4. Static variable and global variable. A static variable retains its value for the entire program run, but its visibility is limited to the file (or function) in which it is declared; for global variables, see the description above.

ASSEMBLER, LINKER AND LOADER:
Normally, the C program building process involves four stages and utilizes different 'tools': a preprocessor, compiler, assembler, and linker. At the end there should be a single executable file. Below are the stages that happen, in order, regardless of the operating system/compiler, graphically illustrated in Figure w.1.

1. Preprocessing is the first pass of any C compilation. It processes include files, conditional compilation instructions and macros.

2. Compilation is the second pass. It takes the output of the preprocessor, and the source code, and generates assembler source code.

3. Assembly is the third stage of compilation. It takes the assembly source code and produces an assembly listing with offsets. The assembler output is stored in an object file.

4. Linking is the final stage of compilation. It takes one or more object files or libraries as input and combines them to produce a single (usually executable) file. In doing so, it resolves references to external symbols, assigns final addresses to procedures/functions and variables, and revises code and data to reflect new addresses (a process called relocation).

Bear in mind that if you use IDE-type compilers, these processes are quite transparent. Now we are going to examine in more detail the processes that happen before and after the linking stage. For any given input file, the file name suffix (file extension) determines what kind of compilation is done; the conventions for GCC are listed in Table w.1. In UNIX/Linux, the executable or binary file doesn't have an extension, whereas in Windows the executables may have extensions such as .exe, .com and .dll.

File extension                    Description
file_name.c                       C source code which must be preprocessed.
file_name.i                       C source code which should not be preprocessed.
file_name.ii                      C++ source code which should not be preprocessed.
file_name.h                       C header file (not to be compiled or linked).
file_name.cc, file_name.cp,
file_name.cxx, file_name.C        C++ source code which must be preprocessed. For file_name.cxx, the xx must both be literally character x; in file_name.C, the C is a capital c.
file_name.s                       Assembler code.
file_name.S                       Assembler code which must be preprocessed.
file_name.o                       Object file; by default, the object file name for a source file is made by replacing the extension .c, .i, .s, etc. with .o.

Table w.1



The following Figure shows the steps involved in building a C program, from compilation until the loading of the executable image into memory for running.

Figure w.1: Compile, link & execute stages for running a program

W.2 OBJECT FILES and EXECUTABLE

After the source code has been assembled, it will produce object files (e.g. .o, .obj); these are then linked, producing an executable file. Object and executable files come in several formats such as ELF (Executable and Linking Format) and COFF (Common Object File Format). For example, ELF is used on Linux systems, while COFF is used on Windows systems. Other object file formats are listed in the following Table.

Object File Format  Description
a.out               The a.out format is the original file format for Unix. It consists of three sections: text, data, and bss, which are for program code, initialized data, and uninitialized data, respectively. This format is so simple that it doesn't have any reserved place for debugging information. The only debugging format for a.out is stabs, which is encoded as a set of normal symbols with distinctive attributes.
COFF                The COFF (Common Object File Format) was introduced with System V Release 3 (SVR3) Unix. COFF files may have multiple sections, each prefixed by a header. The number of sections is limited. The COFF specification includes support for debugging, but the debugging information is limited. There is no file extension for this format.
ECOFF               A variant of COFF. ECOFF is an Extended COFF, originally introduced for MIPS and Alpha workstations.
XCOFF               The IBM RS/6000 running AIX uses an object file format called XCOFF (eXtended COFF). The COFF sections, symbols, and line numbers are used, but debugging symbols are dbx-style stabs whose strings are located in the .debug section (rather than the string table). The default name for an XCOFF executable file is a.out.
PE                  Windows 9x and NT use the PE (Portable Executable) format for their executables. PE is basically COFF with additional headers. The extension is normally .exe.
ELF                 The ELF (Executable and Linking Format) came with System V Release 4 (SVR4) Unix. ELF is similar to COFF in being organized into a number of sections, but it removes many of COFF's limitations. ELF is used on most modern Unix systems, including GNU/Linux, Solaris and Irix, and also on many embedded systems.
SOM/ESOM            SOM (System Object Module) and ESOM (Extended SOM) are HP's object file and debug formats (not to be confused with IBM's SOM, which is a cross-language Application Binary Interface, ABI).

Table w.2

When we examine the content of these object files, there are areas called sections. Sections can hold executable code, data, dynamic linking information, debugging data, symbol tables, relocation information, comments, string tables, and notes. Some sections are loaded into the process image, some provide information needed in the building of a process image, while still others are used only in linking object files.
There are several sections that are common to all executable formats (they may be named differently, depending on the compiler/linker), as listed below:

Section             Description
.text               Contains the executable instruction codes and is shared among every process running the same binary. This section usually has READ and EXECUTE permissions only. This is the section most affected by optimization.
.bss                BSS stands for 'Block Started by Symbol'. It holds uninitialized global and static variables. Since the BSS only holds variables that don't have any values yet, it doesn't actually need to store an image of these variables. The size that the BSS will require at runtime is recorded in the object file, but the BSS (unlike the data section) doesn't take up any actual space in the object file.
.data               Contains the initialized global and static variables and their values. It is usually the largest part of the executable. It usually has READ/WRITE permissions.
.rdata              Also known as the .rodata (read-only data) section. This contains constants and string literals.
.reloc              Stores the information required for relocating the image while loading.
Symbol table        A symbol is basically a name and an address. The symbol table holds information needed to locate and relocate a program's symbolic definitions and references. A symbol table index is a subscript into this array. Index 0 both designates the first entry in the table and serves as the undefined symbol index. The symbol table contains an array of symbol entries.
Relocation records  Relocation is the process of connecting symbolic references with symbolic definitions. For example, when a program calls a function, the associated call instruction must transfer control to the proper destination address at execution. Relocatable files must have relocation entries, which are necessary because they contain information that describes how to modify their section contents, thus allowing executable and shared object files to hold the right information for a process's program image. Simply said, relocation records are information used by the linker to adjust section contents.
Table w.3: Segments in an executable file

The following is an example of dumping object file content using the readelf program. Another utility that can be used is objdump.

/* testprog1.c */
#include <stdio.h>

static void display(int i, int *ptr);

int main(void)
{
    int x = 5;
    int *xptr = &x;
    printf("In main() program:\n");
    printf("x value is %d and is stored at address %p.\n", x, (void *)&x);
    printf("xptr pointer points to address %p which holds a value of %d.\n", (void *)xptr, *xptr);
    display(x, xptr);
    return 0;
}

void display(int y, int *yptr)
{
    char var[7] = "ABCDEF";
    printf("In display() function:\n");
    printf("y value is %d and is stored at address %p.\n", y, (void *)&y);
    printf("yptr pointer points to address %p which holds a value of %d.\n", (void *)yptr, *yptr);
}

[bodo@bakawali test]$ gcc -c testprog1.c
[bodo@bakawali test]$ readelf -a testprog1.o

ELF Header:
  Magic:   7f 45 4c 46 01 01 01 00 00 00 00 00 00 00 00 00
  Class:                             ELF32
  Data:                              2's complement, little endian
  Version:                           1 (current)
  OS/ABI:                            UNIX - System V
  ABI Version:                       0
  Type:                              REL (Relocatable file)
  Machine:                           Intel 80386
  Version:                           0x1
  Entry point address:               0x0
  Start of program headers:          0 (bytes into file)
  Start of section headers:          672 (bytes into file)
  Flags:                             0x0
  Size of this header:               52 (bytes)
  Size of program headers:           0 (bytes)
  Number of program headers:         0
  Size of section headers:           40 (bytes)
  Number of section headers:         11
  Section header string table index: 8

Section Headers:
  [Nr] Name             Type      Addr     Off    Size   ES Flg Lk Inf Al
  [ 0]                  NULL      00000000 000000 000000 00      0   0  0
  [ 1] .text            PROGBITS  00000000 000034 0000de 00  AX  0   0  4
  [ 2] .rel.text        REL       00000000 00052c 000068 08      9   1  4
  [ 3] .data            PROGBITS  00000000 000114 000000 00  WA  0   0  4
  [ 4] .bss             NOBITS    00000000 000114 000000 00  WA  0   0  4
  [ 5] .rodata          PROGBITS  00000000 000114 00010a 00   A  0   0  4
  [ 6] .note.GNU-stack  PROGBITS  00000000 00021e 000000 00      0   0  1
  [ 7] .comment         PROGBITS  00000000 00021e 000031 00      0   0  1
  [ 8] .shstrtab        STRTAB    00000000 00024f 000051 00      0   0  1
  [ 9] .symtab          SYMTAB    00000000 000458 0000b0 10     10   9  4
  [10] .strtab          STRTAB    00000000 000508 000021 00      0   0  1
Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings)
  I (info), L (link order), G (group), x (unknown)
  O (extra OS processing required) o (OS specific), p (processor specific)

There are no program headers in this file.

Relocation section '.rel.text' at offset 0x52c contains 13 entries:
 Offset     Info    Type            Sym.Value  Sym. Name
0000002d  00000501 R_386_32          00000000   .rodata
00000032  00000a02 R_386_PC32        00000000   printf
00000044  00000501 R_386_32          00000000   .rodata
00000049  00000a02 R_386_PC32        00000000   printf
0000005c  00000501 R_386_32          00000000   .rodata
00000061  00000a02 R_386_PC32        00000000   printf
0000008c  00000501 R_386_32          00000000   .rodata
0000009c  00000501 R_386_32          00000000   .rodata
000000a1  00000a02 R_386_PC32        00000000   printf
000000b3  00000501 R_386_32          00000000   .rodata
000000b8  00000a02 R_386_PC32        00000000   printf
000000cb  00000501 R_386_32          00000000   .rodata
000000d0  00000a02 R_386_PC32        00000000   printf

There are no unwind sections in this file.

Symbol table '.symtab' contains 11 entries:
   Num:    Value  Size Type    Bind   Vis      Ndx Name
     0: 00000000     0 NOTYPE  LOCAL  DEFAULT  UND
     1: 00000000     0 FILE    LOCAL  DEFAULT  ABS testprog1.c
     2: 00000000     0 SECTION LOCAL  DEFAULT    1
     3: 00000000     0 SECTION LOCAL  DEFAULT    3
     4: 00000000     0 SECTION LOCAL  DEFAULT    4
     5: 00000000     0 SECTION LOCAL  DEFAULT    5
     6: 00000080    94 FUNC    LOCAL  DEFAULT    1 display
     7: 00000000     0 SECTION LOCAL  DEFAULT    6
     8: 00000000     0 SECTION LOCAL  DEFAULT    7
     9: 00000000   128 FUNC    GLOBAL DEFAULT    1 main
    10: 00000000     0 NOTYPE  GLOBAL DEFAULT  UND printf

No version information found in this file.

When writing a program in assembly language, it should be compatible with the sections used by the assembler directives (x86); the partial list that interests us is given below:

Section                              Description
1 Text (.section .text)              Contains code (instructions). Contains the _start label.
2 Read-Only Data (.section .rodata)  Contains pre-initialized constants.
3 Read-Write Data (.section .data)   Contains pre-initialized variables.
4 BSS (.section .bss)                Contains un-initialized data.

Table w.4

The assembler directives in assembly programming can be used to identify code and data sections, allocate/initialize memory and make symbols externally visible or invisible. An example of assembly code with some of the assembler directives (Intel) is shown below:

;initializing data
.section .data
x:  .byte 128           ;one byte initialized to 128
y:  .long 1,1000,10000  ;3 long words

;initializing ascii data
.ascii "hello"          ;ascii without null character
.asciz "hello"          ;ascii with null character

;allocating memory in bss
.section .bss
.equ BUFFSIZE 1024      ;define a constant
.comm z, 4, 4           ;allocate 4 bytes for z with 4-byte alignment

;making symbols externally visible
.section .data
.globl w        ;declare externally visible, e.g. int w = 10

.text
.globl fool     ;e.g. fool(void) {…}
fool:
    …
    leave
    ret

W.3 RELOCATION RECORDS

Because the various object files will include references to each other's code and/or data at various locations, these references need to be resolved when the files are combined at link time. For example, in Figure w.2, the object file that has main() includes calls to the functions funct() and printf(). After linking all of the object files together, the linker uses the relocation records to find all of the addresses that need to be filled in.

W.4 SYMBOL TABLE

Since assembling to machine code removes all traces of labels from the code, the object file format has to keep these around in a different place. This is accomplished by the symbol table, which contains a list of names and their corresponding offsets in the text and data segments. A disassembler provides support for translating back from an object file or executable.

Figure w.2: The relocation record

W.5 LINKING



The linker actually enables separate compilation. As shown in Figure w.3, an executable can be made up of a number of source files which can be compiled and assembled into their object files respectively, and independently.

Figure w.3: The object files linking process

W.5.1 SHARED OBJECTS

In a typical system, a number of programs will be running. Each program relies on a number of functions, some of which will be standard C library functions, like printf(), malloc(), strcpy(), etc., and some of which are non-standard or user-defined functions.

If every program carried the standard C library, each program would normally have a unique copy of this particular library present within it. Unfortunately, this results in wasted resources and degrades efficiency and performance. Since the C library is common, it is better to have each program reference the single common instance of that library, instead of having each program contain a copy of it. This is implemented during the linking process, where some of the objects are linked at link time whereas others are linked at run time (deferred/dynamic linking).

W.5.2 STATICALLY LINKED

The term 'statically linked' means that the program and the particular library that it's linked against are combined together by the linker at link time. This means that the binding between the program and the particular library is fixed and known at link time, before the program runs. It also means that we can't change this binding, unless we re-link the program with a new version of the library.

Programs that are linked statically are linked against archives of objects (libraries) that typically have the extension .a. An example of such a collection of objects is the standard C library, libc.a.

You might consider linking a program statically, for example, in cases where you weren't sure whether the correct version of a library will be available at runtime, or if you were testing a new version of a library that you don't yet want to install as shared.

For gcc, the -static option can be used during the compilation/linking of the program:

gcc -static filename.c -o filename

The drawback of this technique is that the executable is quite big in size, since all the needed information has to be brought together into it.

W.5.3 DYNAMICALLY LINKED

The term 'dynamically linked' means that the program and the particular library it references are not combined together by the linker at link time. Instead, the linker places information into the executable that tells the loader which shared object module the code is in and which runtime linker should be used to find and bind the references. This means that the binding between the program and the shared object is done at runtime: before the program starts, the appropriate shared objects are found and bound.

This type of program is called a partially bound executable, because it isn't fully resolved. The linker, at link time, didn't cause all the referenced symbols in the program to be associated with specific code from the library. Instead, the linker simply said something like: "This program calls some functions within a particular shared object, so I'll just make a note of which shared object these functions are in, and continue on."

Symbols for the shared objects are only verified for their validity, to ensure that they do exist somewhere, and are not yet combined into the program. The linker stores in the executable program the locations of the external libraries where it found the missing symbols. Effectively, this defers the binding until runtime.

Programs that are linked dynamically are linked against shared objects that have the extension .so. An example of such an object is the shared object version of the standard C library, libc.so.

The advantages of deferring the linking of some objects/modules until they are actually needed at run time include:

1.
Program files (on disk) become much smaller, because they need not hold all the necessary text and data segment information. This is very useful for portability.

2. Standard libraries may be upgraded or patched without every program needing to be re-linked. This clearly requires some agreed module-naming convention that enables the dynamic linker to find the newest installed module, such as a version specification. Furthermore, the libraries can be distributed in binary form (no source), including dynamically linked libraries (DLLs), and when you change your program you only have to recompile the file that was changed.

3. Software vendors need only provide the related library modules required. Additional runtime linking functions allow such programs to programmatically link only the required modules.

4. In combination with virtual memory, dynamic linking permits two or more processes to share read-only executable modules, such as the standard C libraries. Using this technique, only one copy of a module needs to be resident in memory at any time, and multiple processes can each execute this shared (read-only) code. This results in a considerable memory saving, although it demands an efficient swapping policy.

W.6 HOW SHARED OBJECTS ARE USED

To understand how a program makes use of shared objects, let's first examine the format of an executable and the steps that occur when the program starts.

W.6.1 SOME ELF FORMAT DETAILS

Executable and Linking Format (ELF) is a binary format used in SVR4 Unix and Linux systems. It is a format for storing programs, or fragments of programs, on disk, created as a result of compiling and linking. ELF not only simplifies the task of making shared libraries, but also enhances dynamic loading of modules at runtime.

W.6.2 ELF SECTIONS

The Executable and Linking Format used by GNU/Linux and other operating systems defines a number of 'sections' in an executable program. These sections are used to provide instructions to the binary file and to allow inspection. Important function sections include the Global Offset Table (GOT), which stores addresses of system functions; the Procedure Linkage Table (PLT), which stores indirect links to the GOT; .init/.fini, for internal initialization and shutdown; and .ctors/.dtors, for constructors and destructors. The data sections are .rodata, for read-only data, .data, for initialized data, and .bss, for uninitialized data.
A partial list of the ELF sections is organized as follows (from low to high addresses):

1. .init – Startup code
2. .text – Program code
3. .fini – Shutdown code
4. .rodata – Read-only data
5. .data – Initialized data
6. .tdata – Initialized thread data
7. .tbss – Uninitialized thread data
8. .ctors – Constructors
9. .dtors – Destructors
10. .got – Global Offset Table
11. .bss – Uninitialized data

You can use the readelf or objdump program against object or executable files in order to view the sections. In the following Figure, two views of an ELF file are shown: the linking view and the execution view.

Figure w.4: Simplified object file format: linking view and execution view.

Keep in mind that the full ELF format contains many more items. As explained previously, the linking view, which is used when the program or library is linked, deals with sections within an object file. Sections contain the bulk of the object file information: data, instructions, relocation information, symbols, debugging information, etc.

The execution view, which is used when the program runs, deals with segments. Segments are a way of grouping related sections. For example, the text segment groups executable code, the data segment groups the program data, and the dynamic segment groups information relevant to dynamic loading. Each segment consists of one or more sections. A process image is created by loading and interpreting segments. The operating system logically copies a file's segments to virtual memory segments according to the information provided in the program header table. The OS can also use segments to create a shared memory resource.

At link time, the program or library is built by merging together sections with similar attributes into segments. Typically, all the executable and read-only data sections are combined into a single text segment, while the data and BSS are combined into the data segment.

These segments are normally called load segments, because they need to be loaded into memory at process creation. Other sections, such as symbol information and debugging sections, are merged into other, non-load segments.

W.7 PROCESS LOADING

In Linux, processes loaded from a file system (using either the execve() or spawn() system calls) are in ELF format. If the file system is on a block-oriented device, the code and data are loaded into main memory. If the file system is memory-mapped (e.g. a ROM/Flash image), the code needn't be loaded into RAM, but may be executed in place. This approach makes all RAM available for data and stack, leaving the code in ROM or Flash. In all cases, if the same process is loaded more than once, its code will be shared.

Before we can run an executable, we first have to load it into memory. This is done by the loader, which is generally part of the operating system. Among other things, the loader does the following:

1. Memory and access validation. First, the OS kernel reads in the program file's header information and validates the type, access permissions, memory requirements and the ability to run its instructions. It confirms that the file is an executable image and calculates the memory requirements.

2. Process setup, which includes:
   1. Allocating primary memory for the program's execution.
   2. Copying the address space from secondary to primary memory.
   3. Copying the .text and .data sections from the executable into primary memory.
   4. Copying program arguments (e.g., command-line arguments) onto the stack.
   5. Initializing registers: setting the esp (stack pointer) to point to the top of the stack, clearing the rest.
   6. Jumping to the start routine, which copies main()'s arguments off of the stack and jumps to main().

The address space is the memory space that contains the program code, stack, and data segments; in other words, all the data the program uses as it runs.
The memory layout, consisting of three segments (text, data, and stack), is shown in simplified form in Figure w.5. The dynamic data segment is also referred to as the heap: the place dynamically allocated memory (such as from malloc() and new) comes from. Dynamically allocated memory is memory allocated at run time instead of at compile/link time.




This organization enables any division of the dynamically allocated memory between the heap (explicitly) and the stack (implicitly), and it explains why the stack grows downward while the heap grows upward.

Figure w.4: Process memory layout

W.8 RUNTIME DATA STRUCTURE – From Sections to Segments

 A process is a running program. This means that the operating system has loaded the executable file for the program into memory, has arranged for it to have access to its command-line arguments and environment variables, and has started it running.
 Typically a process has 5 different areas of memory allocated to it, as listed in Table w.5 (refer to Figure w.4):

Segment: Code – text segment
Description: Often referred to as the text segment, this is the area in which the executable instructions reside. Linux/Unix arranges things so that multiple running instances of the same program share their code if possible: only one copy of the instructions for the same program resides in memory at any time. The portion of the executable file containing the text segment is the text section.

Segment: Initialized data – data segment
Description: Statically allocated and global data that are initialized with nonzero values live in the data segment. Each process running the same program has its own data segment. The portion of the executable file containing the data segment is the data section.

Segment: Uninitialized data – bss segment
Description: BSS stands for 'Block Started by Symbol'. Global and statically allocated data that are initialized to zero by default are kept in what is called the BSS area of the process. Each process running the same program has its own BSS area. When running, the BSS data are placed in the data segment. In the executable file, they are stored in the BSS section. In the Linux/Unix executable format, only variables that are initialized to a nonzero value occupy space in the executable's disk file.

Segment: Heap
Description: The heap is where dynamic memory (obtained by malloc(), calloc(), realloc(), and new for C++) comes from. Everything on a heap is anonymous; you can only access parts of it through a pointer. As memory is allocated on the heap, the process's address space grows. Although it is possible to give memory back to the system and shrink a process's address space, this is almost never done, because the memory would just be allocated to another process again. Freed memory (free() and delete) goes back to the heap, creating what are called holes. It is typical for the heap to grow upward: successive items added to the heap are placed at addresses numerically greater than previous items. It is also typical for the heap to start immediately after the BSS area of the data segment. The end of the heap is marked by a pointer known as the break. You cannot reference past the break, but you can move the break pointer (via the brk() and sbrk() system calls) to a new position to increase the amount of heap memory available.

Segment: Stack
Description: The stack segment is where local (automatic) variables are allocated. In a C program, local variables are all variables declared inside the opening left curly brace of a function body (including main()) or of another block, that aren't declared static. Data is pushed onto and popped off the stack following the Last In, First Out (LIFO) rule. The stack holds local variables, temporary information, function parameters, the return address, and the like. When a function is called, a stack frame (or procedure activation record) is created and PUSHed onto the top of the stack. This stack frame contains information such as the address from which the function was called, where to jump back to when the function is finished (the return address), parameters, local variables, and any other information needed by the invoked function. The order of this information may vary by system and compiler. When a function returns, the stack frame is POPped from the stack. Typically the stack grows downward: items deeper in the call chain are at numerically lower addresses, toward the heap.

Table w.5

 When a program is running, the initialized data, BSS, and heap areas are usually placed into a single contiguous area called a data segment.
 The stack segment and code segment are separate from the data segment and from each other, as illustrated in Figure w.4.
 Although it is theoretically possible for the stack and heap to grow into each other, the operating system prevents that event.
 The relationship among the different sections/segments is summarized in Table w.6, executable program segments and their locations:

Executable file section (disk file) | Address space segment | Program memory segment
.text | Text | Code
.data | Data | Initialized data
.bss | Data | BSS
– | Data | Heap
– | Stack | Stack

Table w.6

W.9 THE PROCESS (IMAGE)

 The diagram below shows the memory layout of a typical C process. The process load segments (corresponding to "text" and "data" in the diagram) are mapped at the process's base address.
 The main stack is located just below and grows downwards. Any additional threads or function calls that are created will have their own stacks, located below the main stack.
 Each of these stacks is separated by a guard page to detect stack overflows. The heap is located above the process and grows upwards.
 In the middle of the process's address space, a region is reserved for shared objects. When a new process is created, the process manager first maps the two segments from the executable into memory.





It then decodes the program's ELF header. If the program header indicates that the executable was linked against a shared library, the process manager will extract the name of the dynamic interpreter from the program header. The dynamic interpreter points to a shared library that contains the runtime linker code. The process manager will load this shared library into memory and then pass control to the runtime linker code in this library.

Figure w.5: C process memory layout on x86.

W.10 RUNTIME LINKER AND SHARED LIBRARY LOADING

 The runtime linker is invoked when a program that was linked against a shared object is started, or when a program requests that a shared object be dynamically loaded.
 The resolution of symbols can therefore be done at one of the following times:
1. Load-time dynamic linking – the application program is read from disk (the disk file) into memory and unresolved references are located. The load-time loader finds all necessary external symbols and alters all references to each symbol (all previously zeroed) to memory references relative to the beginning of the program.
2. Run-time dynamic linking – the application program is read from disk into memory and unresolved references are left as invalid (typically zero). The first access of an invalid, unresolved reference results in a software trap. The run-time dynamic linker determines why this trap occurred and seeks out the necessary external symbol. Only this symbol is loaded into memory and linked into the calling program.
 The runtime linker is contained within the C runtime library. It performs several tasks when loading a shared library (.so file).
 The dynamic section provides information to the linker about the other libraries this library was linked against, the relocations that need to be applied, and the external symbols that need to be resolved.
 The runtime linker will first load any other required shared libraries (which may themselves reference other shared libraries).
 It will then process the relocations for each library. Some of these relocations are local to the library, while others require the runtime linker to resolve a global symbol.
 In the latter case, the runtime linker will search through the list of libraries for the symbol. In ELF files, hash tables are used for the symbol lookup, so it is very fast.
 Once all relocations have been applied, any initialization functions registered in the shared library's init section are called. This is used in some implementations of C++ to call global constructors.

W.11 SYMBOL NAME RESOLUTION

 When the runtime linker loads a shared library, the symbols within that library have to be resolved. Here, the order and the scope of the symbol resolution are important.
 If a shared library calls a function that happens to exist by the same name in several libraries that the program has loaded, the order in which these libraries are searched for this symbol is critical.
This is why the OS defines several options that can be used when loading libraries.
 All the objects (executables and libraries) that have global scope are stored on an internal list (the global list).
 Any global-scope object, by default, makes all of its symbols available to any shared library that gets loaded.
 The global list initially contains the executable and any libraries that are loaded at the program's startup.

W.12 DYNAMIC ADDRESS TRANSLATION

 In terms of memory management, modern multitasking operating systems normally implement dynamic relocation rather than static relocation. The program layout in the address space is virtually the same for every process. Dynamic relocation (in processor terms, dynamic address translation) provides the illusion that:
1. Each process can use addresses starting at 0, even if other processes are running, or even if the same program is running more than once.
2. Address spaces are protected.
3. A process can be fooled further into thinking it has much more memory than the available physical memory (virtual memory).
 In dynamic relocation the address is translated dynamically on every reference. A virtual address (also called a logical address) is generated by a process, and the physical address is the actual address in physical memory at run time. The address translation is normally done by a Memory Management Unit (MMU) incorporated in the processor itself.
 Virtual addresses are relative to the process. Each process believes that its virtual addresses start from 0. The process does not even know where it is located in physical memory; the code executes entirely in terms of virtual addresses.
 The MMU can refuse to translate virtual addresses that are outside the range of memory for the process, for example by generating a segmentation fault. This provides protection for each process.
 During translation, parts of the address space of a process can even be moved between disk and memory as needed (normally called swapping or paging). This allows the virtual address space of the process to be much larger than the physical memory available to it.
 Graphically, this dynamic relocation for a process is shown in Figure w.6.

Figure w.6: Physical and virtual address: Address translation
By sudhakarmaradana
Jun 2012
KWP2000 and UDS Difference
What are the differences between KWP2000 and UDS?
1. Event triggering and periodic transmission are available only in UDS.
2. Positive-response suppression for TesterPresent is not available in KWP2000.
3. For the transfer of measurement values, only two-byte identifiers are available in UDS; KWP2000 has a one-byte recordLocalIdentifier and a two-byte commonIdentifier.
4. Error memory management differs.

Differences between KWP 2000 and UDS
The classic diagnostic communication with KWP protocols has favored a symmetrical number of requests and responses. In contrast, UDS provides event-driven and periodic services, for which the number of requests and responses can differ greatly. The KWP 2000 principles for transferring measurement values and managing the ECU's error memory were re-engineered for the UDS standard.

Transfer of measurement values
For the transfer of measurement values, only the two-byte dataIdentifiers are available with UDS. KWP 2000 specifies a one-byte recordLocalIdentifier and a two-byte commonIdentifier. To increase data transmission efficiency, several measurement values can be requested with one UDS service request, and there are two different response types. The specified data identifiers are more comprehensive (see ISO 14229-1 annex C.1). Examples include:
• $F100 … $F19F: for example, KWP 2000 identifiers, calibration data, and ODX file identifiers
• $F2xx: periodic data identifiers
• $F3xx: dynamically defined data identifiers
• $F4xx … $F8xx: OBD according to ISO 15031-5
When measured values or bigger memory areas have to be transmitted via memory addressing, the addressAndLengthFormatIdentifier of the UDS standard provides more capable addressing. The blockSequenceCounter makes data transfer more efficient, because a complete restart of the transfer is not necessary in case of an error.

Error memory management
KWP 2000 contains four services for the management of the error memory.
These are $14 (clearDiagnosticInformation), $18 (readDTCByStatus), $17 (readStatusOfDTC), and $12 (readFreezeFrameData). In contrast, the UDS standard specifies only two services for error memory management: $14 (clearDiagnosticInformation) and $19 (readDTCInformation). But because there are 21 different subfunctions for the service request $19 (readDTCInformation), the abilities of these services are greatly enhanced. The UDS standard contains approximately 60 pages of specifications for error memory management.

Documents: Presentation_Debrecen_En_2008_03_27

K-Line vs. CAN for diagnostics:

K-Line: Reserved for diagnostic communication | CAN: Diagnostic and continuous communication between ECUs
K-Line: Longer data packets can be transmitted | CAN: A CAN frame is max. 8 bytes; encapsulation of requests is required
K-Line: Configurable communication speed | CAN: Fixed speed, because of the continuous bus configuration
K-Line: Arbitration must be implemented by SW (UART) | CAN: Bus arbitration and CAN-frame structure are handled by HW
K-Line: Additional wire + HW component (Layer 1) | CAN: Wire + required HW component already exist
K-Line: Additional SW driver for Layer 2 communication must be implemented | CAN: SW drivers already exist; only the diagnostic communication SW must be implemented

Differences between CANalyzer and CANoe: The CANalyzer and CANoe tools were developed to meet the essential needs of the CAN-based module or system developer by combining a comprehensive set of measurement and simulation capabilities. Both CANalyzer and CANoe can interface to multiple CAN networks (or other common small area network protocols), and provide accurate time-stamped measurements for all communication transfers, including both acknowledged messages and communication errors. Recording and playback operations are standard. Users can record the messages from one system and e-mail them to another engineer for playback and analysis. Both tools basically operate like a multi-channel oscilloscope, a multi-channel logic analyzer, and a custom alphanumeric display unit, all using an integrated database. In addition, both tools are capable of creating any message generation pattern, much like a programmable function generator, with complete control of all network data variables (or signals). As shown in Figure 3, both CANoe and CANalyzer share a major portion of the same network analysis interface.

Figure 3 – CANalyzer & CANoe Major Network Analysis Interfaces One Key Difference – Level of Node Control

One key difference between CANalyzer and CANoe is the level of node control. Essentially, a single CANalyzer tool can act as a single network member, while CANoe has no limit on the number of modules it may substitute for. As shown in Figure 4, CANalyzer supports the control of a single node (a single tester, or a single module simulation), while CANoe supports the control of a collection of multiple nodes (any number of module simulations or any number of testers).

Figure 4 – Level of Node Control Distinguishes Between CANalyzer and CANoe

In CANoe, each node may be enabled to evaluate a simulation, or disabled to allow connection of a real module to the "remaining network simulation". This can be done in real time for any number of nodes and for one or more communication networks. As shown in Figure 5, the ability to interconnect a real module to a CANoe setup that represents "all the other remaining network members" provides a significant testing advantage in distributed product architectures.

Figure 5 – Using CANoe to Simulate the Rest of the System

The limitations when using CANoe depend on both the speed of the available PC and the amount of CAN hardware that can be placed on a single PC. While laptops are typically limited to 4 CAN network connections (2 PCMCIA cards with 2 CAN channels each), desktop configurations with up to 32 CAN channels have been created for special applications.

Graphic Panels – The Other Major Difference

The second and quite distinctive difference from CANalyzer is that CANoe supports "graphic panels" for both inputs and outputs. This allows the user to construct "higher-level application" behavior to simulate actual inputs and outputs. For example, let's assume that your new project requires you to build a tester. Traditionally, you would typically choose between two alternatives:
• Build a custom electronic module – design all the hardware and software yourself
• Build a semi-customized PC-based system
However, another choice is now available: you could construct the entire tester in CANoe and write the entire application in CAPL. CANoe allows you to construct tester panel interfaces to provide inputs and outputs. You can add the necessary CAPL software to interconnect your switch presses to the corresponding CAN transmit messages that you wish the tester to send. It is also easy to connect incoming CAN receive messages to your front panel graphic output devices.
In addition, moving meters, blinking lights, and numerical display graphics are easy to create (see Figure 6).

Figure 6 – Example of CANoe Graphics Used for Both Front Panel Input and Output

Bit-mapped graphics and digital photos of actual product front panels, as shown in Figure 7, can be easily animated for use.

Figure 7 – Example of User-Designated Bitmapped Graphics
By sudhakarmaradana
Jun 2012

CAN Basics
The Controller Area Network (CAN) protocol incorporates a powerful means of seamlessly preventing data corruption during message collisions. This arbitration process and its relationship to the electrical-layer variables are explained. Techniques to force message collisions and test arbitration are demonstrated, with strategies to leverage arbitration as a quantitative benchmark in safety-critical systems. The benchmark is then applied to several example systems and results provided for comparison.

Introduction
The ability of a Controller Area Network to manage message collisions provides a unique proving ground for protocol compliance in any application. A means of determining a benchmark for a system's performance, by measuring a network's ability to execute proper arbitration, is developed in this example. It is demonstrated that while a CAN bus appears to be functioning normally, many arbitration errors may go unnoticed by system operators.

CAN Implementations
In CAN there are two main hardware implementations: Basic CAN and Full CAN.

Basic CAN: Basic CAN has only one message buffer for receive and transmit messages. A received message is accepted or ignored after acceptance filtering; the decision to process a message or to ignore it is achieved by acceptance filtering, which in Basic CAN is done in software. To reduce the software load at the nodes, some messages can be ignored by ignoring specific identifiers, realized by a bit mask for the message identifiers.

Full CAN: In Full CAN, there are 8 to 16 memory buffers for every transmitted or received message. Here the acceptance filtering is done by hardware, not by software. Every buffer can be configured to accept messages with specific IDs. Since the acceptance filtering is done by hardware, the software load is greatly reduced. Having different buffers for different messages allows more time for processing received messages, and transmitted messages can be handled according to priority levels. Configuring each buffer for every message also ensures data consistency in Full CAN.

Arbitration Basics
Since any CAN node may begin to transmit when the bus is free, two or more nodes may begin to transmit simultaneously. Arbitration is the process by which these nodes battle for control of the bus. Proper arbitration is critical to CAN performance because it is the mechanism that guarantees that message collisions do not reduce bandwidth or cause messages to be lost. Each data or remote frame begins with an identifier, which assigns the priority and content of the message. As the identifier is broadcast, each transmitting node compares the value received on the bus to the value being broadcast. The higher-priority message in a collision has a dominant bit earlier in the identifier. Therefore, if a transmitting node senses a dominant bit on the bus in place of the recessive bit it transmitted, it interprets this as another message with higher priority transmitting simultaneously. That node suspends transmission before the next bit and automatically retransmits when the bus is idle. The result of proper arbitration is that the high-priority message is transmitted without interruption, followed immediately by the lower-priority message (unless, of course, another high-priority message attempts to broadcast immediately after the same message). Since no messages are lost or corrupted in the collision, data and bandwidth are not compromised.

Electrical-Layer Variables (bit timing requirements)
Each CAN bit is divided into four segments (see Figure 1). The first segment, the synchronization segment (SYNC_SEG), is the time in which a recessive-to-dominant or dominant-to-recessive transition is expected to occur.
The second segment, the propagation time segment (PROP_SEG), is designed to compensate for the physical delay times of the network as shown in Figure 2, and should be twice the sum of the propagation delay of the bus, the input comparator delay, and the output driver delay. The third and fourth segments, both phase buffer segments (PHASE_SEG1 & PHASE_SEG2), are used for resynchronization. The bit value is sampled immediately following PHASE_SEG1.

The bit rate may be changed either by changing the oscillator frequency, which is usually restricted by the processor requirements, or by specifying the length of the bit segments in "time quanta" and the prescaler value. The prescaler value is multiplied by the minimum time quantum, which is the reciprocal of the system clock frequency, 1/f_sys, to determine the length of a working time quantum. The bit time may then be calculated as the sum of the bit segments, and the bit rate as the reciprocal of this sum.

Each node must perform a hard synchronization upon every recessive-to-dominant edge after a bus idle or received start of frame. Hard synchronization is a restarting of the internal bit timing to force the edge into the SYNC_SEG, where edges are expected to occur. Resynchronization is performed on all other recessive-to-dominant edges of other received bits by lengthening or shortening the PHASE_SEG1 or PHASE_SEG2 by one to four time quanta as specified by the resynchronization jump width. If the difference between the edge causing resynchronization and the SYNC_SEG exceeds the resynchronization jump width, the effective result is the same as a hard synchronization. CAN Network Errors CAN protocol specifies five different types of network errors. A transmitting node detects a bit error when it monitors a bit value different than it is transmitting; the reaction to this condition varies with the nature of the error. A stuff error occurs when the bit-stuffing rule is violated – a bit of opposite value must be inserted immediately following any series of five consecutive bits of the same value in a message. A cyclic redundancy check (CRC) error occurs when a receiving node receives a different CRC sequence than anticipated. (Note that all nodes independently calculate the CRC sequence from the data field). A form error occurs when a field contains an illegal bit value. Finally, an acknowledgement (ACK) error occurs when the transmitter does not monitor a dominant bit in the ACK slot to signify that the message had been received properly by another node as shown in Figure 2. When a node detects a bus error, it transmits an error frame consisting of six dominant bits followed by eight recessive bits. Multiple nodes transmitting an error frame will not cause a problem because the first recessive bits will be overwritten. The result will remain six dominant bits followed by eight recessive bits, and cause the bus to be safely reset before normal communications recommence. 
The CAN protocol provides a means of fault confinement by requiring each node to maintain separate receive and transmit error counters. Either counter will be incremented by 1 or 8, depending on the type of error and the conditions surrounding the error. The receive error counter is incremented for errors during message reception, and the transmit error counter is incremented for errors during message transmission (for further details, see reference 1). When either of these counters exceeds 127, the node is declared "error-passive," which prevents the node from sending any further dominant error frames. When the transmit error count exceeds 255, the node is declared "bus-off," which prevents the node from sending any further transmissions. The receive and transmit error counters are also decremented by 1 each time a message is received or transmitted without error, respectively. This allows a node to return from error-passive mode to error-active mode (normal transmission mode) when both counters are less than 128. The node may also return to error-active mode from bus-off mode after having received 128 occurrences of 11 consecutive recessive bits. Overall, a network maintains constant transmit and receive error counters if it averages eight properly transmitted or received messages for each error that occurs during transmission or reception, respectively. Controller Area Network (CAN) started life in 1983 at Robert Bosch GmbH as a serial data bus standard for the interconnection of microcontrollers in vehicles. Although originally designed specifically for automotive applications, it is now also used in other applications. The protocol was officially released in 1986, and the first CAN controller chips, produced by Intel and Philips, were available commercially in 1987. The CAN 2.0 specification was published by Bosch in 1991.
The data link and physical layers of CAN for data rates of up to 125 kbps (described as "low-speed serial data communication") were defined in part two of the original ISO standard published in 1994 (ISO 11519). Part 1 of a later ISO standard published in 2003 (ISO 11898) covers the data link and physical layers of CAN, but for data rates of up to 1 Mbps. There are also a number of other related standards. The higher-layer protocols used with CAN depend on the application. A number of microcontrollers (for example, Microchip Technology's PIC microcontrollers) now have CAN support built in. A modern car will typically have on the order of fifty (and sometimes many more) electronic control units (ECUs) controlling various automotive sub-systems. The largest microprocessor unit in a car is usually the engine control unit (also, confusingly, commonly abbreviated to ECU). Other microprocessors control elements ranging from the transmission system and braking system right down to cosmetic elements such as in-car audio systems and driving-mirror adjustment. Some of these subsystems operate independently, but others need to communicate with each other and process and respond to data received from sensors. The CAN bus in a vehicle control system will typically connect the engine control unit with the transmission control system, for example. It is also highly suited to use as a fieldbus in general automation environments, and has become widely used for such applications, in part because of the low cost, small size and availability of many CAN controllers and processors. In automotive systems, CAN buses are an ideal alternative to expensive, cumbersome and unreliable wiring looms and connectors. A CAN network interconnects control devices, sensors and actuators (collectively referred to here as nodes). Every node attached to a CAN bus can send and receive data, but not at the same time. A message consists primarily of an identifier that identifies the type and sender of the message, and up to eight bytes of actual data.
The physical medium in a CAN network is a differential two-wire bus (usually either unshielded or shielded twisted pair), and the signaling scheme used is Non-Return to Zero (NRZ) with bit stuffing. Because CAN is essentially a broadcast network, messages are received by all nodes. The messages do not reach the devices directly, but via each node's host-processor and CAN controller; these elements sit between the node itself and the data bus. Any node may transmit a message provided the bus is free. If two or more nodes transmit at the same time, the system of arbitration simply gives priority based on message ID number. The message with the higher-priority ID will overwrite all other messages, and any nodes responsible for the lower-priority messages will back off and wait before retransmitting. Each node has a host-processor, which interprets incoming messages and determines when it needs to send outgoing messages; sensors, actuators and control devices, which can be connected to the host-processor as required; and a CAN controller, which is implemented in hardware and has a synchronous clock. The CAN controller buffers incoming messages until they can be retrieved by the host-processor, generating an interrupt to let the host-processor know that a message is waiting. The CAN controller also buffers outgoing messages, which it receives from the host-processor and then transmits via the bus. A transceiver handles the physical-layer transmission, and is usually integrated with the CAN controller. The data rates possible are dependent on the length of the bus. Data rates of up to 1 Mbps are possible at network lengths below 40 metres. Decreasing the data rate to 125 kbps would allow a network length of up to 500 metres. Transmission of messages in a CAN is based on the producer-consumer (broadcast) principle. A message transmitted by one node (the producer) is received by all other nodes (the consumers).
Messages do not have a destination address, but a Message ID. Messages in the standard format have an 11-bit Message ID, enabling 2,048 different messages to be defined for any one system – more than sufficient for most applications. For applications that require a larger number of messages, an extended message format with a 29-bit Message ID may be used, allowing over five hundred million different messages to be defined. Only certain messages will apply to each node on the network, so a node receiving a message

must apply acceptance filtering (usually implemented in hardware, and based on the Message ID). If the message received by a node is relevant to it, it will be processed; otherwise it will be ignored. CAN networks may be expanded without modification to existing hardware or software if the devices to be added are purely receivers, and if they only require messages that are already generated by the network.

Arbitration in CAN networks
The standard form of arbitration in a CAN network is Carrier Sense Multiple Access/Bitwise Arbitration (CSMA/BA). If two or more nodes start transmitting at the same time, arbitration is based on the priority level of the message ID, and allows the message whose ID has the highest priority to be delivered immediately, without delay. This makes CAN ideal for real-time, priority-based systems. Each node, when it starts to transmit its message ID, will monitor the bus state and compare each bit received from the bus with the bit transmitted. If a dominant bit (0) is received when a recessive bit (1) has been transmitted, the node stops transmitting because another node has established priority. The concept is illustrated by the diagram below.

Bitwise arbitration in CAN networks

Arbitration is performed as the identifier field is transmitted, and is non-destructive. Each node transmits its 11-bit Message ID, starting with the highest-order bit (bit 10). Binary zero (0) is a dominant bit, and binary one (1) is a recessive bit. Because a dominant bit overwrites a recessive bit on the bus, the state of the bus always reflects the state of the message ID with the highest priority (i.e. the lowest number). As soon as a node sees a bit comparison that is unfavourable to itself, it ceases to participate in the arbitration process and waits until the bus is free again before attempting to retransmit its message. The message with the highest priority thus continues to be transmitted without delay, and unimpeded. In the above illustration, Node 2 transmits bit 5 as a recessive bit (1) while the bus level read is dominant (0), so Node 2 backs off. Similarly, Node 1 backs off after transmitting bit 2 as a recessive bit while the bus level remains dominant. Node 3 is then free to complete transmission of its message. The Message ID for each system element is assigned by the system designer, and the arbitration method used ensures that the highest-priority message will always be transmitted ahead of the others, should simultaneous transmissions occur. The bus is thus allocated on the basis of need. The only limiting factor is therefore the capacity of the bus itself. Outstanding transmission requests are dealt with in their order of priority, with minimum delay and maximum utilisation of the available bus capacity. In any system, some parameters will change more rapidly than others. In a motor vehicle, for example, the rpm of the engine will change far more rapidly than the temperature of the engine coolant. The more rapidly changing parameters will probably need more frequent monitoring, and for this reason will probably be given a higher priority.
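The wired-AND behaviour of bitwise arbitration can be simulated in a few lines. This sketch is not tied to any real controller; it simply shows why the lowest numeric ID always wins.

```python
def arbitrate(ids, width=11):
    """Simulate the wired-AND bus bit by bit (MSB first) and return the winner.

    Dominant (0) overwrites recessive (1); a node that sends a recessive bit
    but reads a dominant level backs off."""
    contenders = list(ids)
    for bit in range(width - 1, -1, -1):
        sent = {i: (i >> bit) & 1 for i in contenders}
        bus = min(sent.values())  # wired-AND: any dominant (0) pulls the bus to 0
        contenders = [i for i in contenders if sent[i] == bus]
    assert len(contenders) == 1   # IDs are unique, so exactly one node survives
    return contenders[0]

assert arbitrate([0x65A, 0x65B, 0x123]) == 0x123  # lowest ID = highest priority
```

Because arbitration is non-destructive, the winning message is transmitted in full on the first attempt; the losers simply retry once the bus is free.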

CAN Frame Format
The general format of a CAN message frame is shown below.

Data is transmitted using Message Frames. The standard CAN protocol (version 2.0A), also known as Base Frame Format, uses an 11-bit Message ID. The extended CAN protocol (version 2.0B), also known as Extended Frame Format, supports both 11-bit and 29-bit Message IDs. Most version 2.0A controllers are tolerant of extended format messages, but essentially ignore them. Version 2.0B controllers can send and receive messages in both formats.

The start of a message frame is signalled by a dominant start-of-frame bit, followed by the 11-bit Message ID and the Remote Transmission Request (RTR) bit, which is only set if the message is a data request frame (as opposed to a data frame). It should be noted here that, although nodes on a CAN network generally send data without being polled, a node may request the transmission of a specific message by another node in the system. The first two bits (r0 and r1) of the 6-bit control field specify the transmission format (i.e. standard or extended), while the last four bits form the Data Length Code (DLC), which indicates the number of bytes of data transmitted. The data field can contain from zero to eight bytes of data, and is followed by the 16-bit CRC field, containing a 15-bit cyclic redundancy check code which is used by the receiving node to detect errors, and a recessive delimiter bit. The ACKnowledge field has two bits. The first is the ACK Slot, which is transmitted as a recessive bit but will be overwritten with a dominant bit by any node that successfully receives the transmitted message. The second bit is a recessive delimiter bit. The end-of-frame field consists of seven recessive bits, and signals that error-free transmission of the message has been completed. The end-of-frame field is followed by the intermission field, consisting of three recessive bits, after which the bus may be considered free for use. Idle time on the bus may be of any length, including zero.

At a data rate of 1 Mbps, it is possible to send in the order of ten thousand standard format messages per second over a CAN network, assuming an average data length of four bytes. The number of messages that could be sent comes down to around seven thousand if all the messages contain the full eight bytes of data allowed. One of the major benefits of CAN is that, if several controllers require the same data from the same device, only one sensor is required, rather than each controller being connected to a separate sensor. As mentioned previously, the data rate that can be achieved depends on the length of the bus, since the bit time interval is adjusted upwards to compensate for any increase in the time required for signals to propagate along the bus, which is proportional to the length of the bus. Maximum bus length and bit rate are thus inversely related.
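The throughput figures above can be checked with a rough calculation. Ignoring stuff bits, a standard data frame occupies 47 + 8n bit times (SOF 1, arbitration 12, control 6, CRC plus delimiter 16, ACK 2, end-of-frame 7, intermission 3); stuff bits push the real-world numbers down towards the figures quoted above.

```python
# Bit count of a standard (11-bit ID) data frame carrying n data bytes,
# ignoring stuff bits: 47 + 8*n.
FRAME_OVERHEAD_BITS = 1 + 12 + 6 + 16 + 2 + 7 + 3  # = 47

def frames_per_second(data_bytes: int, bit_rate: int = 1_000_000) -> int:
    """Upper bound on frames per second on a fully loaded bus (no stuffing)."""
    return bit_rate // (FRAME_OVERHEAD_BITS + 8 * data_bytes)

assert frames_per_second(4) == 12658  # ~10k/s once stuff bits are accounted for
assert frames_per_second(8) == 9009   # ~7k/s once stuff bits are accounted for
```

The worst-case stuffing overhead depends on the bit pattern, which is why the article's quoted figures are noticeably below these no-stuffing upper bounds.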

Message frame format

Message frame for standard format (CAN Specification 2.0A)

The CAN protocol supports two message frame formats, the only essential difference being in the length of the identifier (ID): in the standard format the length of the ID is 11 bits, and in the extended format it is 29 bits. The message frame for transmitting messages on the bus comprises seven main fields. A message in the standard format begins with the "start of frame" bit, followed by the "arbitration field", which contains the identifier and the RTR (remote transmission request) bit, indicating whether it is a data frame or a request frame without any data bytes (remote frame). The "control field" contains the IDE (identifier extension) bit, which indicates either standard format or extended format, a bit reserved for future extensions and – in the last 4 bits – a count of the data bytes in the data field. The "data field" ranges from 0 to 8 bytes in length and is followed by the "CRC field", which is used as a frame security check for detecting bit errors. The "ACK field" comprises the ACK slot (1 bit) and the ACK delimiter (1 recessive bit). The bit in the ACK slot is sent as a recessive bit and is overwritten as a dominant bit by those receivers which have at this time received the data correctly (positive acknowledgement). Correct messages are acknowledged by the receivers regardless of the result of the acceptance test. The end of the message is indicated by "end of frame". "Intermission" is the minimum number of bit periods separating consecutive messages. If there is no following bus access by any station, the bus remains idle ("bus idle").

Standard Data Frame

The CAN standard data frame is shown in Figure 2-1. As with all other frames, the frame begins with a Start-Of-Frame (SOF) bit, which is of the dominant state and allows hard synchronization of all nodes. The SOF is followed by the arbitration field, consisting of 12 bits: the 11-bit identifier and the Remote Transmission Request (RTR) bit. The RTR bit is used to distinguish a data frame (RTR bit dominant) from a remote frame (RTR bit recessive). Following the arbitration field is the control field, consisting of six bits. The first bit of this field is the Identifier Extension (IDE) bit, which must be dominant to specify a standard frame. The following bit, Reserved Bit Zero (RB0), is reserved and is defined as a dominant bit by the CAN protocol. The remaining four bits of the control field are the Data Length Code (DLC), which specifies the number of bytes of data (0 – 8 bytes) contained in the message. After the control field is the data field, which contains any data bytes that are being sent, and is of the length defined by the DLC (0 – 8 bytes). The Cyclic Redundancy Check (CRC) field follows the data field and is used to detect transmission errors. The CRC field consists of a 15-bit CRC sequence, followed by the recessive CRC Delimiter bit. The final field is the two-bit Acknowledge (ACK) field. During the ACK Slot bit, the transmitting node sends out a recessive bit. Any node that has received an error-free frame acknowledges the correct reception of the frame by sending back a dominant bit (regardless of whether the node is configured to accept that specific message or not). The recessive acknowledge delimiter completes the acknowledge field and may not be overwritten by a dominant bit.

Extended Data Frame

In the extended CAN data frame, shown in Figure 2-2, the SOF bit is followed by the arbitration field, which consists of 32 bits. The first 11 bits are the Most Significant bits (Base ID) of the 29-bit identifier. These 11 bits are followed by the Substitute Remote Request (SRR) bit, which is defined to be recessive. The SRR bit is followed by the IDE bit, which is recessive to denote an extended CAN frame. It should be noted that if arbitration remains unresolved after transmission of the first 11 bits of the identifier, and one of the nodes involved in the arbitration is sending a standard CAN frame (11-bit identifier), the standard CAN frame will win arbitration due to the assertion of a dominant IDE bit. Also, the SRR bit in an extended CAN frame must be recessive to allow the assertion of a dominant RTR bit by a node that is sending a standard CAN remote frame. The SRR and IDE bits are followed by the remaining 18 bits of the identifier (Extended ID) and the remote transmission request bit. To enable standard and extended frames to be sent across a shared network, the 29-bit extended message identifier is split into 11-bit (most significant) and 18-bit (least significant) sections. This split ensures that the IDE bit can remain at the same bit position in both the standard and extended frames. Following the arbitration field is the six-bit control field. The first two bits of this field are reserved and must be dominant. The remaining four bits of the control field are the DLC, which specifies the number of data bytes contained in the message. The remaining portion of the frame (data field, CRC field, acknowledge field, end-of-frame and intermission) is constructed in the same way as a standard data frame.
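The 11-bit/18-bit split of the 29-bit identifier can be illustrated with a small helper; the example ID below is arbitrary.

```python
def split_extended_id(ext_id: int):
    """Split a 29-bit identifier into the 11-bit Base ID (most significant,
    transmitted first and used first in arbitration) and the 18-bit
    ID extension (least significant, following the SRR and IDE bits)."""
    assert 0 <= ext_id < (1 << 29)
    base = ext_id >> 18            # bits 28..18
    extension = ext_id & 0x3FFFF   # bits 17..0
    return base, extension

base, ext = split_extended_id(0x18DAF110)   # an arbitrary 29-bit example
assert (base << 18) | ext == 0x18DAF110    # lossless reconstruction
assert base < (1 << 11) and ext < (1 << 18)
```

Keeping the Base ID in the same bit positions as a standard 11-bit identifier is exactly what lets standard and extended frames arbitrate against each other on a shared bus.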

Remote Data Frame

Normally, data transmission is performed on an autonomous basis by the data source node (e.g., a sensor sending out a data frame). It is possible, however, for a destination node to request data from the source. To accomplish this, the destination node sends a remote frame with an identifier that matches the identifier of the required data frame. The appropriate data source node will then send a data frame in response to the remote frame request. There are two differences between a remote frame (shown in Figure 2-3) and a data frame. First, the RTR bit is at the recessive state and, second, there is no data field. In the event of a data frame and a remote frame with the same identifier being transmitted at the same time, the data frame wins arbitration due to the dominant RTR bit following the identifier. In this way, the node that transmitted the remote frame receives the desired data immediately.
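Because the dominant RTR bit makes a data frame's arbitration field numerically lower than that of a remote frame with the same identifier, the data frame always wins. A minimal sketch (the helper function is illustrative):

```python
def arbitration_word(msg_id: int, is_remote: bool) -> int:
    """The 11-bit ID followed by the RTR bit, as compared during arbitration.
    Lower values win; a data frame sends RTR dominant (0), a remote frame
    sends RTR recessive (1)."""
    return (msg_id << 1) | (1 if is_remote else 0)

same_id = 0x2A5
data_frame = arbitration_word(same_id, is_remote=False)
remote_frame = arbitration_word(same_id, is_remote=True)
assert data_frame < remote_frame  # the data frame wins arbitration
```

This is why a requester never collides destructively with the answer it asked for: if both hit the bus at once, the answer goes through first.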

An error frame is generated by any node that detects a bus error. An error frame, shown in Figure 2-4, consists of two fields: an error flag field followed by an error delimiter field. There are two types of error flag field; the type sent depends upon the error status of the node that detects the error.

Active Errors

If an error-active node detects a bus error, the node interrupts transmission of the current message by generating an active error flag. The active error flag is composed of six consecutive dominant bits. This bit sequence actively violates the bit-stuffing rule. All other stations recognize the resulting bit-stuffing error and, in turn, generate error frames themselves, called error echo flags. The error flag field therefore consists of between six and twelve consecutive dominant bits (generated by one or more nodes). The error delimiter field (eight recessive bits) completes the error frame. Upon completion of the error frame, bus activity returns to normal and the interrupted node attempts to resend the aborted message.

Passive Errors

If an error-passive node detects a bus error, the node transmits an error-passive flag followed by the error delimiter field. The error-passive flag consists of six consecutive recessive bits, so the error frame for an error-passive node consists of 14 recessive bits. From this it follows that, unless the bus error is detected by an error-active node or by the transmitting node itself, the message will continue transmission, because the error-passive flag does not interfere with the bus. If the transmitting node generates an error-passive flag, it will cause other nodes to generate error frames due to the resulting bit-stuffing violation. After transmission of an error frame, an error-passive node must wait for six consecutive recessive bits on the bus before attempting to rejoin bus communications. The error delimiter consists of eight recessive bits and allows the bus nodes to restart bus communications cleanly after an error has occurred.
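The error-active and error-passive states above are governed by per-node error counters. The sketch below models only the transmit error counter, with the state thresholds (128 and 255) defined by the CAN specification; the increment/decrement rules are simplified, and real controllers also keep a receive error counter.

```python
class ErrorState:
    """Simplified CAN fault-confinement model (transmit error counter only)."""

    def __init__(self):
        self.tec = 0  # Transmit Error Counter

    def on_tx_error(self):
        self.tec += 8               # a transmit error typically adds 8

    def on_tx_success(self):
        self.tec = max(0, self.tec - 1)

    @property
    def state(self) -> str:
        if self.tec > 255:
            return "bus-off"        # node takes itself off the bus
        if self.tec >= 128:
            return "error-passive"  # may only send passive (recessive) flags
        return "error-active"       # may send active (dominant) error flags

node = ErrorState()
assert node.state == "error-active"
for _ in range(16):
    node.on_tx_error()
assert node.state == "error-passive"  # 16 * 8 = 128
for _ in range(16):
    node.on_tx_error()
assert node.state == "bus-off"        # 256 > 255
```

The hysteresis built into the real rules (successful frames decrement the counters) is what distinguishes sporadic glitches from a permanently defective station.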

Overload Frame and Interframe Space

An overload frame, shown in Figure 2-5, has the same format as an active error frame. An overload frame, however, can only be generated during an interframe space. In this way, an overload frame can be differentiated from an error frame (an error frame is sent during the transmission of a message). The overload frame consists of two fields: an overload flag followed by an overload delimiter. The overload flag consists of six dominant bits followed by overload flags generated by other nodes (giving, as for an active error flag, a maximum of twelve dominant bits). The overload delimiter consists of eight recessive bits. An overload frame can be generated by a node as a result of two conditions:

1. The node detects a dominant bit during the interframe space, an illegal condition. Exception: a dominant bit detected during the third bit of the interframe space is interpreted by the receivers as a SOF.

2. Due to internal conditions, the node is not yet able to begin reception of the next message.

A node may generate a maximum of two sequential overload frames to delay the start of the next message.

The interframe space separates a preceding frame (of any type) from a subsequent data or remote frame. The interframe space is composed of at least three recessive bits, called the intermission. This allows nodes time for internal processing before the start of the next message frame. After the intermission, the bus line remains in the recessive state (bus idle) until the next transmission starts.

Detecting and signalling errors

Unlike other bus systems, the CAN protocol does not use acknowledgement messages, but instead signals any errors that occur.

Cyclic Redundancy Check (CRC)
The CRC safeguards the information in the frame by adding redundant check bits at the transmission end. At the receiver end these bits are re-computed and tested against the received bits; if they do not agree, there has been a CRC error.
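The check-bit computation is a 15-bit CRC. The sketch below implements the generator polynomial specified for classical CAN, x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1 (0x4599 with the leading term dropped); the example bit stream is arbitrary.

```python
CAN_CRC15_POLY = 0x4599  # CAN generator polynomial, leading x^15 term implicit

def crc15(bits) -> int:
    """Bit-serial CRC-15 over the frame bits (SOF through data field), MSB first."""
    crc = 0
    for bit in bits:
        msb = (crc >> 14) & 1        # current high bit of the 15-bit register
        crc = (crc << 1) & 0x7FFF    # shift left, keep 15 bits
        if msb ^ bit:
            crc ^= CAN_CRC15_POLY
    return crc

frame_bits = [0, 1, 0, 1, 1, 0, 0, 1]  # an arbitrary example bit stream
sent = crc15(frame_bits)

# The receiver recomputes the CRC over the same bits; any mismatch is a CRC error.
corrupted = frame_bits[:]
corrupted[3] ^= 1                      # flip one bit in transit
assert crc15(corrupted) != sent        # the single-bit error is detected
```

Any single-bit error is guaranteed to be caught, since the polynomial has more than one term; the 15-bit CRC also detects all burst errors up to 15 bits long.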

Frame check
This mechanism verifies the structure of the transmitted frame by checking the bit fields against the fixed format and the frame size. Errors detected by frame checks are designated "format errors".

ACK errors

As mentioned above, frames received are acknowledged by all recipients through positive acknowledgement. If no acknowledgement is received by the transmitter of the message (ACK error), this may imply that there is a transmission error which has been detected only by the recipients, that the ACK field has been corrupted, or that there are no receivers. The CAN protocol also implements two mechanisms for error detection at the bit level:

Monitoring
The ability of the transmitter to detect errors is based on the monitoring of bus signals: each node which transmits also observes the bus level and thus detects differences between the bit sent and the bit received. This permits reliable detection of all global errors and errors local to the transmitter.

Bit stuffing
The coding of the individual bits is tested at bit level. The bit representation used by CAN is NRZ (non-return-to-zero) coding, which guarantees maximum efficiency in bit coding. The synchronization edges are generated by means of bit stuffing: after five consecutive equal bits the sender inserts into the bit stream a stuff bit with the complementary value, which is removed by the receivers. The code check is limited to checking adherence to the stuffing rule.

If one or more errors are discovered by at least one station using the above mechanisms, the current transmission is aborted by sending an "error flag". This prevents other stations accepting the message and thus ensures the consistency of data throughout the network. After transmission of an erroneous message has been aborted, the sender automatically re-attempts transmission (automatic repeat request). There may again be competition for bus allocation. As a rule, retransmission will begin within 23 bit periods after error detection; in special cases the system recovery time is 31 bit periods.

However effective and efficient the described method may be, in the event of a defective station it might lead to all messages (including correct ones) being aborted, thus blocking the bus system, if no measures for self-monitoring were taken. The CAN protocol therefore provides a mechanism for distinguishing sporadic errors from permanent errors and localizing station failures (fault confinement). This is done by statistical assessment of station error situations, with the aim of recognizing a station's own defects and possibly entering an operating mode where the rest of the CAN network is not negatively affected. This may go as far as the station switching itself off to prevent messages erroneously recognized as incorrect from being aborted.
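The stuffing rule described in this section (insert a complementary bit after five consecutive equal bits, remove it again at the receiver) can be sketched as follows; a sixth consecutive equal bit seen during de-stuffing is exactly the stuff error used to signal error flags.

```python
def stuff(bits):
    """Insert a complementary stuff bit after five consecutive equal bits."""
    out, run_bit, run_len = [], None, 0
    for b in bits:
        out.append(b)
        run_bit, run_len = (b, run_len + 1) if b == run_bit else (b, 1)
        if run_len == 5:
            out.append(1 - b)             # stuff bit with the complementary value
            run_bit, run_len = 1 - b, 1   # the stuff bit starts a new run
    return out

def destuff(bits):
    """Remove stuff bits; raise on six consecutive equal bits (stuff error)."""
    out, run_bit, run_len, i = [], None, 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run_bit, run_len = (b, run_len + 1) if b == run_bit else (b, 1)
        if run_len == 5:
            i += 1                        # the next bit must be a stuff bit
            if i < len(bits):
                if bits[i] == b:
                    raise ValueError("stuff error: six consecutive equal bits")
                run_bit, run_len = bits[i], 1
        i += 1
    return out

raw = [0, 0, 0, 0, 0, 0, 1, 1]            # six equal bits in the payload
assert stuff(raw) == [0, 0, 0, 0, 0, 1, 0, 1, 1]
assert destuff(stuff(raw)) == raw
```

Note that the stuff bit itself counts towards the next run, which both functions model by restarting the run counter at the inserted bit.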

Extended format CAN messages
The SAE "Truck and Bus" subcommittee standardized signals and messages as well as data transmission protocols for various data rates. It became apparent that standardization of this kind is easier to implement when a longer identification field is available. To support these efforts, the CAN protocol was extended by the introduction of a 29-bit identifier. This identifier is made up of the existing 11-bit identifier (base ID) and an 18-bit extension (ID extension). Thus the CAN protocol allows the use of two message formats: Standard CAN (Version 2.0A) and Extended CAN (Version 2.0B). As the two formats have to coexist on one bus, the protocol lays down which message has higher priority in the case of bus access collisions between differing formats with the same base identifier:

the message in standard format always has priority over the message in extended format.
CAN controllers which support the messages in extended format can also send and receive messages in standard format. When CAN controllers which only cover the standard format (Version 2.0A) are used on one network, then only messages in standard format can be transmitted on the entire network; messages in extended format would be misunderstood. However, there are CAN controllers which only support standard format but recognize messages in extended format and ignore them (Version 2.0B passive). The distinction between standard format and extended format is made using the IDE bit (Identifier Extension bit), which is transmitted as dominant in the case of a frame in standard format; for frames in extended format it is recessive. The RTR bit is transmitted dominant or recessive depending on whether data are being transmitted or whether a specific message is being requested from a station. In place of the RTR bit in standard format, the SRR (substitute remote request) bit is transmitted for frames with extended ID. The SRR bit is always transmitted as recessive, to ensure that in the case of arbitration the standard frame always has priority bus allocation over an extended frame when both messages have the same base identifier. Unlike the standard format, in the extended format (CAN specification 2.0B) the IDE bit is followed by the 18-bit ID extension, the RTR bit and a reserved bit (r1). All the following fields are identical with the standard format. Conformity between the two formats is ensured by the fact that the CAN controllers which support the extended format can also communicate in standard format.

Error detection and management

Nodes that transmit messages on a CAN network monitor the bus level to detect transmission errors, which will be globally effective. In addition, nodes receiving messages monitor them to ensure that they have the correct format throughout, as well as recalculating the CRC to detect any transmission errors that have not previously been detected (i.e. locally effective errors). The CAN protocol also has a mechanism for detecting and shutting down defective network nodes, ensuring that they cannot continually disrupt message transmission. When errors are detected, either by the transmitting node or a receiving node, the node that detects the error signals an error condition to all other nodes on the network by transmitting an error message frame containing a series of six consecutive bits of the dominant polarity. This triggers an error because, with the bit stuffing used by the signalling scheme, messages should never have more than five consecutive bits of the same polarity (when bit stuffing is employed, the transmitter inserts a bit of opposite polarity after five consecutive bits of the same polarity; the additional bits are subsequently removed by the receiver, a process known as de-stuffing). All network nodes will detect the error message and discard the offending message (or parts thereof, if the whole message has not yet been received). If the transmitting node generates or receives an error message, it will immediately thereafter attempt to retransmit the message.

CAN gateway

A CAN/CAN gateway incorporates two CAN controllers with a microcontroller. CAN messages are received by one CAN controller, processed by the microcontroller and then sent by the opposite CAN controller. "Processed" means that messages may be filtered, remapped to different CAN identifiers, or have their data content altered. Different baud rates may also be used on the two sides. A CAN/CAN gateway connects two CAN systems and controls the message exchange by applying rules and functions to these messages. This distinguishes the gateway from the repeater, which acts more or less like a piece of cable. This extended functionality leads to higher costs compared with a CAN repeater.

The main issue when using a CAN/CAN gateway is the latency between a message being received and being sent out on the other side. For an idle network this time is the propagation delay of the gateway, but as soon as there is bus load the latency becomes undetermined, for two reasons. Even if there is only bus load on the receiving side, the propagation delay increases, because the microcontroller has to spend more and more time in the CAN receive interrupt routine, and a message to be sent out has to wait for the processor to have time to do so. If there is bus load on both sides, the CAN controller has to wait for an idle bus to get access to it. This time increases with the bus load and also depends on the CAN identifiers used, due to their priority. Therefore the CAN system integrator has to take extra care over the bus load if a gateway is going to be used. From the application point of view, the implications of this issue become obvious when looking, for example, at the SYNC message of the CANopen protocol. This message is used to synchronize actions on different CANopen nodes to the moment they receive the message. If the SYNC message is delayed by an arbitrary period of time, system behaviour tends to become unpredictable. Filtering, for example by using the acceptance filter of the CAN controller, will help, but cannot remove the effect completely. Because the CAN/CAN gateway acts on messages, error frames will not be propagated from one side to the other. Furthermore, CAN/CAN gateways can have the full extent of bus length on both sides for a given baud rate, and the full count of modules.

The motivation to use a CAN/CAN gateway arises when two CAN systems should be connected but the message flow has to be controlled. In this case the system integrator benefits from the versatile functionality offered by these devices. When developing a new electronic control unit (ECU) for an automotive application, for example, the system designer may use a CAN/CAN gateway to shift identifiers or to modify the data contents of specific messages, and is by this means able to combine old and new units. A CAN/CAN gateway which routes the CAN messages over a medium like an optical fibre (CGFL, EtherCAN FX) or Ethernet (EtherCAN CI), using a protocol like TCP or UDP, is called a CAN/CAN router. Routers extend the concept of a gateway to a larger physical expanse, but preserve the attributes of a gateway. These devices are able to realize point-to-point connections of CAN systems over distances of up to 40 km with low latencies, and enable sophisticated CAN applications.
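The gateway's rule-based forwarding (filter, remap identifiers, optionally rewrite data) can be sketched as a simple routing function; the message type and rule tables here are invented for illustration and do not correspond to any real gateway's configuration format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    can_id: int
    data: bytes

def forward(msg: Message, id_map: dict, blocked: set) -> Optional[Message]:
    """Apply gateway rules: drop blocked IDs, remap others, pass the rest."""
    if msg.can_id in blocked:
        return None                       # filtered out: never reaches side B
    new_id = id_map.get(msg.can_id, msg.can_id)
    return Message(new_id, msg.data)      # data rewriting would also go here

out = forward(Message(0x100, b"\x01\x02"), id_map={0x100: 0x300}, blocked={0x7FF})
assert out.can_id == 0x300
assert forward(Message(0x7FF, b""), {}, {0x7FF}) is None
```

Each forwarded message still has to win arbitration on the destination bus, which is the source of the load-dependent latency discussed above.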

CAN Repeater

Basic functioning scheme of a CAN repeater

A CAN repeater incorporates two CAN transceivers with glue logic. It propagates a CAN signal from one side to the other and vice versa; an ideal CAN repeater therefore acts like a piece of cable, transparent to the CAN signal. Due to the propagation delay of the two CAN transceivers and the glue logic, an equivalent cable length can be given for a specific CAN repeater: about 40 m for a repeater without galvanic decoupling, and about 60 m for a device with galvanic decoupling. The maximum length of a CAN system for a given baud rate cannot be extended by the use of a CAN repeater, but a repeater does allow the implementation of topologies other than a simple line: stub lines or star topologies can be realized by using CAN repeaters.

If a CAN network is divided into two segments by a CAN repeater, both of them must be terminated correctly with 120 Ω resistors at their ends. The segments are physically independent, but form a single CAN network from the logical point of view. This means that the maximum length of a stub line is determined only by the maximum distance between two end points of the network. A CAN line which has a stub somewhere in the middle has three end points, "A", "B" and "C". The maximum of the three distances "AB", "AC" and "BC" determines the maximum baud rate in this specific system; note that the equivalent cable length of the CAN repeater also has to be taken into account. A CAN repeater can be used to regenerate the CAN signal for very long CAN lines, or it can help to increase the maximum count of nodes in a CAN system. Because a CAN repeater is transparent to the CAN signal, error frames are also propagated; however, a repeater may offer the functionality to disconnect a segment which is locked in a permanent dominant state, which can help to increase system reliability. Repeaters are often configurable with a parameter called the inhibit time; choosing the correct value (typically 10-20% of the bit time) is important for the repeater to work as expected in a dedicated system. The most important motivation for using a CAN repeater is to implement a network topology which is not a line. This approach can help to decrease the overall length of a CAN system. If galvanic decoupling between two parts of a CAN network is needed, it can be realized by a galvanically decoupled CAN repeater.
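The distance rule above (the maximum of AB, AC and BC, plus the repeater's equivalent cable length on paths that cross it) can be sketched as follows; the 40 m figure is the equivalent length quoted above for a repeater without galvanic decoupling, and the example distances are invented.

```python
REPEATER_EQUIV_M = 40.0  # equivalent cable length, no galvanic decoupling

def limiting_distance(paths):
    """paths: (metres, crosses_repeater) for each pair of network end points.
    The maximum effective length limits the usable bit rate."""
    return max(m + (REPEATER_EQUIV_M if crosses else 0.0) for m, crosses in paths)

# Trunk A-B is 300 m (direct); C hangs on a stub behind the repeater,
# so paths A-C (250 m) and B-C (350 m) cross the repeater.
assert limiting_distance([(300, False), (250, True), (350, True)]) == 390.0
```

An effective 390 m is within the roughly 500 m quoted earlier for 125 kbps, so this stub topology would still run at that bit rate.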
A CAN repeater can thus offer a solution to network problems which otherwise could only be solved at higher cost or, in the worst case, by having to use something other than CAN.
