Computer architecture can be defined as both an art and a science because it involves designing the interface between selected hardware and software components to produce a functional, energy-efficient, cost-effective, and high-performance computer. Computer architecture also entails designing the control and data path of a processor such as an ARM core, enabling a machine to perform several tasks at once through superscalar execution and pipelining (Blanchet and Dupouy 21). In addition, it covers the design of storage systems and fast memory. A computer is a complex device made up of various sub-systems, each of which forms part of the hardware, and the interconnection and interaction of these sub-systems constitute what we call computer architecture. Its main components are the instruction set architecture, the micro-architecture, and the system design.
The instruction set architecture is the machine-language interface and structure that a programmer must understand before writing a correct program for the machine. A good instruction set architecture must therefore be easy for the designer to implement, convenient to program, and compatible enough to remain usable for the next thirty years, so its features should match the purposes of the designer, the programmer, and the organization. Separately, the division of memory into distinct data and instruction caches is known as the Harvard architecture. Two broad paths to performance help in designing a high-performance processor that works within the cost, size, and power limits set by market demand. Operations can be made faster by semiconductor processes that allow transistors to switch quickly and improve signal propagation, and execution-unit latency falls when more transistors are devoted to the logic (Blanchet and Dupouy 31). To reduce the levels of logic needed to implement a particular function, aggressive design techniques are applicable, such as custom rather than standard-cell layout; similarly, a designer can increase circuit speed by using dynamic rather than static circuits.
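To make the Harvard organization concrete, the following minimal Python sketch models a machine whose instruction store and data store are separate arrays, so an instruction fetch and a data access never contend for one shared memory. The three-instruction mini-ISA and all names are illustrative assumptions, not drawn from the cited sources.

# Hypothetical mini-machine with a Harvard memory organization:
# instructions and data live in separate stores.
instruction_memory = [          # illustrative three-instruction program
    ("LOAD", 0),                # load data_memory[0] into the accumulator
    ("ADD", 1),                 # add data_memory[1] to the accumulator
    ("STORE", 2),               # write the accumulator to data_memory[2]
]
data_memory = [5, 7, 0]         # separate store for operands and results

accumulator = 0
for opcode, address in instruction_memory:    # fetch from instruction memory
    if opcode == "LOAD":
        accumulator = data_memory[address]    # data access uses its own memory
    elif opcode == "ADD":
        accumulator += data_memory[address]
    elif opcode == "STORE":
        data_memory[address] = accumulator

print(data_memory)   # [5, 7, 12]: the result lands in the data store

Because the two stores are independent, a real Harvard-style processor can fetch the next instruction while the current one is still reading or writing data, which is the point of splitting the caches.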
Pipelining is also important for executing instructions within a specified period. Extending the pipeline allows more instructions to be initiated in a given interval, but longer pipelines suffer from a higher percentage of stalls. To attain parallelism in processors, superscalar and pipelining techniques are needed to exploit instruction-level parallelism (Hennessy and Patterson 343). Superscalar execution overlaps instructions in space on different resources, while pipelined processors overlap instructions in time on the same execution resources. In many cases the performance gained through parallelism falls short of expectations, and stalls arise from data hazards, which reflect data dependences, as well as from control hazards, which result from changes in program flow. Structural hazards arising from hardware resource conflicts, together with data and control hazards, undermine the efficiency of pipelining.
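As a rough illustration of why stalls erode pipelining gains, the short Python sketch below uses the standard approximation that a k-stage pipeline finishes N instructions in k + (N - 1) cycles when nothing stalls, and then adds one bubble per hazard. The function name and the stall counts are assumptions chosen only for illustration.

def pipeline_cycles(num_instructions, stages, stall_cycles=0):
    """Approximate cycle count for a simple in-order pipeline:
    fill time (stages) + one cycle per remaining instruction + stall bubbles."""
    return stages + (num_instructions - 1) + stall_cycles

n = 100
print(pipeline_cycles(n, stages=5))                     # 104 cycles: ideal 5-stage pipeline
print(pipeline_cycles(n, stages=5, stall_cycles=20))    # 124 cycles with hazard bubbles
print(pipeline_cycles(n, stages=10, stall_cycles=40))   # 149 cycles: deeper pipeline, more stalls

A deeper pipeline does shorten each cycle, but the extra stall cycles from hazards, as the paragraph notes, claw back part of that advantage.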
The increase in cycles per instruction (CPI) caused by superpipelining lengthens instruction execution time, since the stages are not evenly balanced, and the superscalar approach suffers from the same effect (Hennessy and Patterson 261). A longer pipeline improves performance only when its higher clock frequency outweighs the increase in CPI. As pipelines deepen, clock skew and latch overheads take a larger share of each cycle, leaving little time for useful logic, and recovering efficiency by shortening a pipeline is rare and difficult. Designers today nonetheless concentrate on high-frequency microprocessors because, in the market's eyes, frequency often overshadows actual performance. The knowledge gained from computer architecture is therefore significant for understanding how instructions execute at the micro level, and it is also vital for reasoning about data flow, about trade-offs between hardware and software, and about minimizing the cost of acquiring and installing both.
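The trade-off described above can be expressed with the classic performance equation, execution time = instruction count x CPI x clock cycle time. The Python sketch below works through it with invented numbers, assumed purely for illustration, to show how a faster clock can still lose when CPI rises enough.

def execution_time(instruction_count, cpi, cycle_time_ns):
    # Classic processor performance equation: time = IC x CPI x clock period.
    return instruction_count * cpi * cycle_time_ns

ic = 1_000_000
baseline = execution_time(ic, cpi=1.2, cycle_time_ns=1.0)     # shallower pipeline
superpiped = execution_time(ic, cpi=1.6, cycle_time_ns=0.8)   # faster clock, higher CPI
print(baseline, superpiped)   # 1200000.0 vs 1280000.0 ns: the CPI increase wins here

With these assumed figures, the superpipelined design is slower despite its shorter cycle, which is exactly the effect the paragraph attributes to unbalanced stages and latch overhead.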
Works Cited
Blanchet, Gerard, and Bertrand Dupouy. Computer Architecture. London: ISTE, 2013. Web. www.worldcat.org/oclc/826685380. Accessed 29 Apr. 2017.
Hennessy, John L., and David A. Patterson. Computer Architecture: A Quantitative Approach. San Francisco: Morgan Kaufmann, 2011. Print.