Thursday 26 February 2015

Learn Embedded Systems Design on ARM-based Microcontrollers.




About ARM
          ARM makes 32-bit and 64-bit RISC multi-core processors. RISC (reduced instruction set computing) processors are designed to execute a smaller set of simpler instructions so that they can operate at higher speed, performing more millions of instructions per second (MIPS). By stripping out unneeded instructions and optimizing pathways, RISC processors deliver outstanding performance at a fraction of the power demand of CISC (complex instruction set computing) devices.
ARM processors are used extensively in consumer electronic devices such as smartphones, tablets, multimedia players and other mobile devices, including wearables. Because of their reduced instruction set, they require fewer transistors, which enables a smaller die size for the integrated circuit (IC). The ARM processor’s smaller size, reduced complexity and lower power consumption make it well suited to increasingly miniaturized devices.
ARM processor features include:
  • Load/store architecture (illustrated in the sketch after this list).
  • An orthogonal instruction set.
  • Mostly single-cycle execution.
  • Enhanced power-saving design.
  • 64-bit and 32-bit execution states for scalable high performance.
  • Hardware virtualization support.
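The load/store point above is easy to picture from a compiler’s perspective: arithmetic never touches memory directly, so even a one-line update of a variable becomes a load, a register operation and a store. The sketch below is illustrative only; the ARM mnemonics in the comments show a typical sequence, not the output of any particular toolchain.

/* Illustration of a load/store architecture: arithmetic instructions
 * operate only on registers, so updating a variable held in memory is
 * split into explicit load, compute and store steps.  The mnemonics in
 * the comments are typical ARM instructions, shown for illustration. */
static int counter;

void tick(void)
{
    counter = counter + 1;   /* LDR r0, [addr_of_counter]   ; load from memory      */
                             /* ADD r0, r0, #1              ; compute in a register */
                             /* STR r0, [addr_of_counter]   ; store back to memory  */
}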
The simplified design of ARM processors enables more efficient multi-core processing and easier coding for developers. While they don't have the same raw compute throughput as the products of x86 market leader Intel, ARM processors sometimes exceed the performance of Intel processors for applications that exist on both architectures.
The head-to-head competition between the vendors is increasing as ARM finds its way into full-size notebooks. Microsoft, for example, offers ARM-based versions of its Surface computers. The cleaner code base of Windows RT versus the x86 versions may also be partly responsible -- Windows RT is more streamlined because it doesn’t have to support as much legacy hardware.
ARM is also moving into the server market, a move that represents a large change in direction and a bet on performance-per-watt over raw compute power. AMD, for example, offers 8-core ARM processors in its Opteron line. ARM servers represent an important shift in server-based computing. A traditional x86-class server with 12, 16, 24 or more cores increases performance by scaling up the speed and sophistication of each processor, using brute-force speed and power to handle demanding computing workloads.
In comparison, an ARM server uses perhaps hundreds of smaller, less sophisticated, low-power processors that share processing tasks among that large number instead of just a few higher-capacity processors. This approach is sometimes referred to as “scaling out,” in contrast with the “scaling up” of x86-based servers.
The ARM architecture was originally developed by Acorn Computers in the 1980s.

Wednesday 25 February 2015

How the Touch Screen works.



History of Touch Screen:
1970s: Resistive touch screens are invented. Although capacitive touch screens were designed first, they were eclipsed in the early years of touch by resistive touch screens. American inventor Dr. G. Samuel Hurst developed resistive touch screens almost accidentally.


Working of a Touch Screen:
           Touch-screen monitors have become more and more commonplace as their price has steadily dropped over the past decade. There are three basic systems that are used to recognize a person's touch:
  • Resistive
  • Capacitive
  • Surface acoustic wave
The resistive system consists of a normal glass panel that is covered with a conductive and a resistive metallic layer. These two layers are held apart by spacers, and a scratch-resistant layer is placed on top of the whole setup. An electrical current runs through the two layers while the monitor is operational. When a user touches the screen, the two layers make contact at that exact spot. The change in the electrical field is noted and the coordinates of the point of contact are calculated by the computer. Once the coordinates are known, a special driver translates the touch into something that the operating system can understand, much as a computer mouse driver translates a mouse's movements into a click or a drag.
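On a microcontroller, a 4-wire resistive panel is usually read by biasing one layer and sampling the other with an ADC. The sketch below shows that idea only; drive_high(), drive_low(), tristate() and adc_read() are hypothetical HAL calls standing in for whatever the target board actually provides.

#include <stdint.h>

/* Hypothetical HAL functions, assumed to exist on the target MCU. */
extern void     drive_high(int pin);
extern void     drive_low(int pin);
extern void     tristate(int pin);
extern uint16_t adc_read(int pin);     /* e.g. 12-bit result, 0..4095 */

enum { XP, XM, YP, YM };               /* the four wires of the panel */

/* Read the raw X coordinate: put a voltage gradient across the X layer
 * and sense the divider formed at the touch point on a Y wire. */
uint16_t touch_read_x(void)
{
    drive_high(XP);
    drive_low(XM);
    tristate(YM);
    return adc_read(YP);
}

/* Read the raw Y coordinate: bias the Y layer, sense on an X wire. */
uint16_t touch_read_y(void)
{
    drive_high(YP);
    drive_low(YM);
    tristate(XM);
    return adc_read(XP);
}

The raw readings would then be scaled against calibration values before the driver hands a screen coordinate to the operating system.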
In the capacitive system, a layer that stores electrical charge is placed on the glass panel of the monitor. When a user touches the monitor with his or her finger, some of the charge is transferred to the user, so the charge on the capacitive layer decreases. This decrease is measured in circuits located at each corner of the monitor. The computer calculates, from the relative differences in charge at each corner, exactly where the touch event took place and then relays that information to the touch-screen driver software. One advantage that the capacitive system has over the resistive system is that it transmits almost 90 percent of the light from the monitor, whereas the resistive system only transmits about 75 percent. This gives the capacitive system a much clearer picture than the resistive system.
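As a rough sketch of that corner calculation, one way a controller could turn the four corner readings into a position is to weight each axis by the share of charge drawn at the corners on that side. The input values and the normalised output below are assumptions made for illustration, not a vendor's algorithm.

/* c_tl, c_tr, c_bl, c_br are the corner measurements (top-left,
 * top-right, bottom-left, bottom-right): the touch draws more charge
 * through the corners it is closest to.  Result is normalised 0.0..1.0. */
typedef struct { float x; float y; } touch_point;

touch_point locate_touch(float c_tl, float c_tr, float c_bl, float c_br)
{
    float total = c_tl + c_tr + c_bl + c_br;
    touch_point p = { 0.0f, 0.0f };

    if (total > 0.0f) {
        p.x = (c_tr + c_br) / total;   /* larger when the touch is to the right */
        p.y = (c_bl + c_br) / total;   /* larger when the touch is lower down   */
    }
    return p;
}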
On the monitor of a surface acoustic wave system, two transducers (one receiving and one sending) are placed along the x and y axes of the monitor's glass plate. Also placed on the glass are reflectors -- they reflect the ultrasonic wave sent from one transducer to the other. The receiving transducer is able to tell if the wave has been disturbed by a touch event at any instant, and can locate it accordingly. The wave setup has no metallic layers on the screen, allowing for 100-percent light throughput and perfect image clarity. This makes the surface acoustic wave system best for displaying detailed graphics (both other systems have significant degradation in clarity).
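The surface acoustic wave idea can be sketched as a timing problem: the receiving transducer sees a dip in the wave's envelope, and the delay of that dip tells the controller how far along the axis the touch sits. The sample format, wave speed and threshold below are illustrative assumptions, not values from a real controller.

#include <stddef.h>

/* samples[]: received amplitude envelope, one value per sample period;
 * dt_us: sample period in microseconds; velocity: wave speed in mm/us.
 * Returns the distance of the touch along this axis in mm, or a negative
 * value if no attenuation dip (i.e. no touch) was found. */
float saw_locate(const float samples[], size_t n,
                 float dt_us, float velocity, float threshold)
{
    for (size_t i = 0; i < n; i++) {
        if (samples[i] < threshold) {          /* first attenuated sample  */
            float delay_us = (float)i * dt_us; /* time offset of the dip   */
            return delay_us * velocity;        /* convert time to distance */
        }
    }
    return -1.0f;
}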
Another area in which the systems differ is in which stimuli will register as a touch event. A resistive system registers a touch as long as the two layers make contact, which means that it doesn't matter if you touch it with your finger or a rubber ball. A capacitive system, on the other hand, must have a conductive input, usually your finger, in order to register a touch. The surface acoustic wave system works much like the resistive system, allowing a touch with almost any object -- except hard and small objects like a pen tip.
As far as price goes, the resistive system is the cheapest; its clarity is the lowest of the three, and its layers can be damaged by sharp objects. The surface acoustic wave setup is usually the most expensive.

See also: Wikipedia’s article on the touch screen.



Tuesday 24 February 2015

Looking forward: Intel to move away from silicon chips at 7nm



Intel plans to address the state of their 14nm, 10nm and future 7nm chips at the International Solid-State Circuits Conference (ISSCC) this year. While the majority of their presentation will be focused around their current 14nm chips, the company plans to discuss how it will continue the journey to 10nm and even 7nm in the years to come.
Intel’s 14nm technology is what we have access to currently, and the first chips based on the 10nm process won’t be available to the public until early 2017. Moving from 14nm to the 10nm manufacturing node is a difficult process, though Intel supposedly has a way to push its technology into smaller, lighter chips. But what’s more interesting than the future 10nm tech is the move to 7nm. To make chips that small, Intel says they’ll need to enlist the help of new materials, which means that 10nm chips will likely be as far as silicon takes us. Some reports state that the replacement material for silicon will be a III-V semiconductor such as indium gallium arsenide (InGaAs), though Intel hasn’t commented on silicon’s replacement yet. The shift to III-V semiconductors, or whatever material the company plans to use, will allow the chips to consume less power while integrating more features into a single die.
Moreover, the company plans to continue Moore’s Law as they journey to 7nm chips. Moore’s Law is an observation by Gordon Moore, the company’s co-founder, that the number of transistors incorporated in a chip will approximately double every 24 months.
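To make the doubling rule concrete, a chip with N transistors today would be expected to hold roughly N x 2^(years/2) transistors after a given number of years. The starting figure in the small sketch below is an arbitrary example, not an Intel number.

#include <math.h>
#include <stdio.h>

int main(void)
{
    double start = 1.0e9;   /* assumed starting point: 1 billion transistors */

    /* Doubling every 24 months means a factor of 2^(years/2). */
    for (int years = 0; years <= 10; years += 2) {
        printf("after %2d years: %5.1f billion transistors\n",
               years, start * pow(2.0, years / 2.0) / 1e9);
    }
    return 0;
}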
Intel was forced to delay their 14nm Broadwell chips by several months due to manufacturing issues. But Intel executive Mark Bohr plans to address this issue at ISSCC 2015. When asked about the delays, Bohr stated:
I think we may have underestimated the learning rate—when you have a technology that adds many more masks, as 14[nm] did…it takes longer to execute experiments in the fab and get information turned, as we describe it. That slowed us down more than we expected and thus it took longer to fix the yields. But we’re into high yields now, and in production on more than one product, with many more to come later this year.
Intel remains confident that the new 10nm chips won’t be delayed, and says the pilot 10nm manufacturing line is running around 50% faster than the 14nm line did. As for how the eventual move to 7nm relates to mobile: while the effects of a move away from silicon will first be seen in desktop and laptop devices, it’s only a matter of time before mobile chips follow suit.

Sunday 22 February 2015

The beginning of Embedded Systems



         An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. Embedded systems control many devices in common use today. 
         One of the very first recognizably modern embedded systems was the Apollo Guidance Computer, developed by Charles Stark Draper at the MIT Instrumentation Laboratory. At the project's inception, the Apollo guidance computer was considered the riskiest item in the Apollo project, as it employed the then newly developed monolithic integrated circuits to reduce size and weight. An early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that was the first high-volume use of integrated circuits. This program alone reduced prices on quad NAND gate ICs from $1,000 each to $3 each, permitting their use in commercial products.
       Since these early applications in the 1960s, embedded systems have come down in price and there has been a dramatic rise in processing power and functionality. An early microprocessor, the Intel 4004, for example, was designed for calculators and other small systems but still required external memory and support chips. In 1978 the National Electrical Manufacturers Association released a standard for programmable microcontrollers, covering almost any computer-based controller, including single-board computers and numerical and event-based controllers.