RISC vs. CISC & Processor Architecture #IS311

Two philosophies have driven the design of microprocessors. One perspective, complex instruction set computing (CISC), deliberately includes complex instructions. This methodology allows for simpler machine-language programs at the expense of additional control unit circuitry (Burd, 2016, p. 120). Leading chip manufacturers such as Intel and AMD have placed more emphasis on increasing processor speed to accommodate the extra instruction cycles. The contrasting school of thought uses a reduced instruction set computing (RISC) methodology. These processors avoid instructions that combine data transformation and data movement operations, and RISC has long been the architecture of choice for computationally intensive applications.

The two opposing architectures have coexisted for roughly 50 years, largely because of backward compatibility. Intel processors support approximately 678 different instructions, and the chip manufacturer must provide backward compatibility for programs written on older platforms (Burd, 2016, p. 134). Because of RISC's simpler instruction set design, it is widely believed that these processors use less power than CISC designs, making them optimal for battery-powered and low-power devices (Clark, 2013). However, Blem, Menon, and Sankaralingam (2013) compared the two architectures and found that neither was inherently more energy efficient than the other. Although two different processor design methodologies continue to be prevalent in the marketplace, no clear winner appears to be emerging in the near future.

Cache Memory

While processor speed and performance continue to be important factors in system architecture, the CPU also needs efficient ways of accessing data for input, processing, and output. The CPU has an integrated set of methods to take advantage of its multiple cores and high clock speeds. For example, cache memory, a special high-speed storage area (usually RAM), can be used to improve system performance. Although volatile, a cache can use algorithms to predict which data from secondary storage is used most frequently. Because primary memory is limited in storage capacity and more expensive, secondary storage in the form of magnetic media is most often used to store files such as databases, video, and program files. Magnetic media uses multiple platters that spin on a spindle motor and are accessed by a read/write head, so it takes time for the CPU to request data through device controllers and eventually locate it on the physical disk. In relational database management systems such as Oracle, for example, cache algorithms can significantly reduce query times for processing and reporting.
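To make the idea of a cache algorithm concrete, the sketch below implements one common eviction policy, least recently used (LRU), in Java. This is a simplification of my own for illustration; real CPU caches and database buffer caches are implemented in hardware or in highly tuned native code, not in application-level Java.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A minimal LRU cache: once capacity is exceeded, the entry that was
// accessed least recently is evicted, keeping "hot" data close at hand.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        // accessOrder = true: iteration order runs from least- to
        // most-recently accessed, so the eldest entry is the LRU one.
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict only when over capacity
    }
}
```

Because every get() marks an entry as recently used, frequently requested data survives eviction while stale data falls out, which is exactly the behavior that lets a cache hide slow trips to the physical disk.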

Chip architecture, cache, and secondary storage are all interrelated when considering design choices for system architecture. The available technology, performance implications, interoperability, and future technology all affect system performance. In the past 10 years, multi-core processors have changed the way data center managers think about virtualization, allowing them to reduce server sprawl and increase server utilization. This technology might never have come to fruition, however, if processor designers had not hit the “power wall” that comes with continually increasing clock speed (Venu, 2011). Quantum computing continues to be researched so that, eventually, it can scale and become more affordable. Storage technology is being rethought in the same vein: researchers are attempting to write data at the atomic level by arranging chlorine atoms on copper sheets at sub-zero temperatures (The Economist, 2016). Innovators and engineers will continue to rethink our preconceived notions to allow for faster processing and higher storage capacities to meet the demands of the marketplace.

References

Atoms and the voids. (2016, July 23). The Economist (US).

Blem, E., Menon, J., & Sankaralingam, K. (2013). Power struggles: Revisiting the RISC vs. CISC debate on contemporary ARM and x86 architectures. Proceedings – International Symposium on High-Performance Computer Architecture (HPCA), 1–12. https://doi.org/10.1109/HPCA.2013.6522302

Burd, S. D. (2016). Systems Architecture (7th ed.). Boston, MA: Cengage Learning.

Clark, J. (2013, April 9). ARM Versus Intel: Instant Replay of RISC Versus CISC. Retrieved September 17, 2016, from http://www.datacenterjournal.com/arm-intel-instant-replay-risc-cisc/

Venu, B. (2011). Multi-core processors: An overview. arXiv preprint arXiv:1110.3535. Retrieved from http://arxiv.org/abs/1110.3535

Moore’s Law and Today’s Technology #IS311

The ability to capture, process, store, and present information is faster and less expensive today than it was 50 years ago. Gordon Moore, who would later co-found Intel, predicted in 1965 that computing would dramatically increase in power while decreasing in relative cost, roughly every two years (Intel, n.d.). According to a recent article in The Economist (2016), this maxim has stood the test of time over the past 50 years; however, the traditional method of shrinking transistors to pack more of them onto a processor is reaching its fundamental limit. This limit has led engineers to look beyond classical physics, which relies on clearly defined binary physical states governed by mathematical rules (Burd, 2016, p. 24). To continue improving processing capabilities, engineers are turning to quantum physics, in which matter at the subatomic level can exist in multiple states at the same time, the basis of the qubit (Burd, 2016, pp. 24-25). This nascent technology is still being prototyped and is far too expensive to reach the public market at this time.
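To put the pace of that prediction in perspective, here is a quick back-of-the-envelope calculation of my own (not a figure from the sources): one doubling every two years compounds to roughly a 33-million-fold increase over 50 years.

```java
// Back-of-the-envelope: one doubling every two years, compounded over 50 years.
public class MooresLaw {
    public static void main(String[] args) {
        int years = 50;
        double doublings = years / 2.0;          // 25 doublings in 50 years
        double growth = Math.pow(2, doublings);  // 2^25
        System.out.printf("Growth after %d years: about %,.0fx%n", years, growth);
        // Prints: Growth after 50 years: about 33,554,432x
    }
}
```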

As new architectures for computing are developed, engineers will need to pay particular attention to memory addressing. Today's Intel processors maintain backward compatibility with the original 8086 microprocessor, which makes it difficult to process an increasing number of bits using faster methodologies (Burd, 2016, pp. 89-90). This was evident at the turn of the millennium, when larger classes of computers began using 64-bit addressing: the change in architecture caused software compatibility issues even though Intel's processors supported both 32-bit and 64-bit memory addressing.
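The stakes of that transition are easy to see from the arithmetic. A 32-bit address can distinguish 2^32 byte locations, about 4 GiB, while 64 bits raises the ceiling to roughly 16 EiB. The quick sketch below (my own illustration, not from Burd) prints the two limits.

```java
import java.math.BigInteger;

// How many distinct byte addresses fit in 32 vs. 64 bits.
public class AddressSpace {
    public static void main(String[] args) {
        BigInteger two = BigInteger.valueOf(2);
        System.out.println("32-bit: " + two.pow(32) + " addresses (about 4 GiB)");
        System.out.println("64-bit: " + two.pow(64) + " addresses (about 16 EiB)");
    }
}
```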

While new advances in processor technology are developed, the average citizen has access to a wide array of consumer electronics that rely on the classical processor. These microcomputer devices include smartphones, tablets, e-readers, laptops, and desktop computers. They typically support tasks such as browsing the web, creating documents, editing spreadsheets, curating photos, using apps (or applications), and performing business functions with accounting software packages (Burd, 2016, p. 35). This class of computers sometimes blurs the definition of a workstation, which is commonly described as a more powerful desktop computer. Workstations are often used for applications that require additional primary memory (RAM) for simultaneously running programs, stronger graphics capabilities for applications such as AutoCAD, or multiple CPUs for statisticians who need faster processing. It may be argued, in support of Moore's Law, that the capabilities of today's workstation may resemble the specifications of the next generation of desktops.

Even though it may be easy for an average consumer to purchase off-the-shelf computing devices for casual personal use, implementing, testing, and deploying systems for the enterprise requires a deep technical understanding of the technology. Understanding how these components interoperate is critical to a project's success. To manage computing resources effectively, one must stay abreast of future technology trends through unbiased sources, such as professional organizations that are funded by memberships rather than by specific vendors (Burd, 2016, pp. 8-9).

References

Intel. (n.d.). 50 Years of Moore's Law. Retrieved September 12, 2016, from http://www.intel.com/content/www/us/en/silicon-innovations/moores-law-technology.html#

Burd, S. D. (2016). Systems Architecture (7th ed.). Boston, MA: Cengage Learning.

Double, Double, Toil and Trouble. (2016, March 12). The Economist (US).

Practice: An Android App with a Button

In this exercise, I build a Youth Hostel App that displays information about a given youth hostel. I couldn't resist choosing a hostel in Modena, Italy, as I can't wait to return to my favorite place to visit (well, besides Disney World!). The Italian countryside is probably one of the most picturesque places I have seen on this planet. This Android app is a continuation of my post from last week, so consider this practice if you've been following along. The next lesson on the docket includes input from the user, so stay tuned.


Building my 2nd Android App

Lab 2: Simplify! The Android User Interface


In this 2nd lab, I develop an application that uses a button to move from the main screen to a 2nd screen. Widgets such as TextView, Button, and ImageView are used to build the application, in conjunction with the strings.xml file and the Translations Editor. To do this, a 2nd activity is created, which includes editing a new Java class file. To follow along with this lab, you can download the image file that is used. The lab is capped off by testing our new recipe app in the Android emulator.
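For anyone who wants a preview of the core pattern before the full walkthrough, here is a minimal sketch of a button launching a second activity with an explicit Intent. The names RecipeActivity, btnRecipe, and the package are placeholders of my own, not necessarily the identifiers the lab uses, and the second activity also needs its own entry in AndroidManifest.xml.

```java
// MainActivity.java: minimal sketch of the button-to-second-screen pattern.
package com.example.recipeapp; // placeholder package name

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;

public class MainActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Wire the button to open the second screen with an explicit Intent.
        Button openButton = (Button) findViewById(R.id.btnRecipe); // placeholder id
        openButton.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // RecipeActivity is the hypothetical second activity's class.
                startActivity(new Intent(MainActivity.this, RecipeActivity.class));
            }
        });
    }
}
```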

If you’re tuning in for the first time, you can start from the beginning on my Android Boot Camp page.