
CS 232 Computer Organization Final Cheat Sheet (DRAFT)

This is a draft cheat sheet. It is a work in progress and is not finished yet.

Terminology

Tera = 10^12
Giga = 10^9
Mega = 10^6
Kilo = 10^3
1 GHz = 1 x 10^9 Hz
1 TB = 1 x 10^12 Bytes
1 Byte = 8 Bits

Conversions

Binary | Hex | Dec
1111 | 0F | 15

2's Complement

0111 = 7
1000 = -8
1111 = -1
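
A minimal C sketch of the mapping above (the 4-bit width and the helper name are assumptions for illustration, not part of the sheet):

    #include <stdio.h>

    /* Interpret the low 4 bits of `bits` as a 4-bit two's-complement value. */
    int from_twos_complement_4bit(unsigned bits)
    {
        bits &= 0xF;               /* keep only 4 bits */
        if (bits & 0x8)            /* sign bit set -> negative */
            return (int)bits - 16; /* subtract 2^4 */
        return (int)bits;
    }

    int main(void)
    {
        printf("%d %d %d\n",
               from_twos_complement_4bit(0x7),  /* 0111 ->  7 */
               from_twos_complement_4bit(0x8),  /* 1000 -> -8 */
               from_twos_complement_4bit(0xF)); /* 1111 -> -1 */
        return 0;
    }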

Logic Gates

AND: AB
OR: A+B
NOT: A'
NAND: (AB)'
NOR: (A+B)'
XOR: A ⊕ B

XOR Truth Table

A | B | A XOR B
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

Basic Identities

Name (Law) | AND form | OR form
Identity | 1A = A | 0+A = A
Null | 0A = 0 | 1+A = 1
Idempotent | AA = A | A+A = A
Inverse | AA' = 0 | A+A' = 1
Commutative | AB = BA | A+B = B+A
Associative | (AB)C = A(BC) | (A+B)+C = A+(B+C)
Distributive | A(B+C) = AB+AC | A+BC = (A+B)(A+C)
Absorption | A(A+B) = A | A+AB = A
DeMorgan's | (AB)' = A'+B' | (A+B)' = A'B'
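
A quick brute-force check of DeMorgan's laws, assuming C's logical operators (&&, ||, !) stand in for AND, OR, NOT:

    #include <assert.h>
    #include <stdio.h>

    int main(void)
    {
        /* Exhaustively test all 1-bit combinations of A and B. */
        for (int a = 0; a <= 1; a++) {
            for (int b = 0; b <= 1; b++) {
                assert(!(a && b) == (!a || !b)); /* (AB)'  = A' + B' */
                assert(!(a || b) == (!a && !b)); /* (A+B)' = A'B'    */
            }
        }
        printf("DeMorgan's laws hold for all input combinations\n");
        return 0;
    }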

Boolean Concepts

Addition: 1+1 = 10 (carry the 1)
Subtraction: 0-1 = 1 (borrow method)
All other operations are straightforward.

Boolean Concepts II

Carry out: a carry out of the leftmost bit
Overflow: the result is too large or too small to fit in the available number of bits
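
An illustrative 4-bit adder in C showing how carry out and signed overflow differ (the 4-bit width and the function name are assumptions made for this example):

    #include <stdio.h>

    /* Add two 4-bit values and report the carry out and signed overflow. */
    void add4(unsigned a, unsigned b)
    {
        unsigned sum  = (a + b) & 0xF;      /* keep the 4 result bits */
        int carry_out = ((a + b) >> 4) & 1; /* carry out of the leftmost bit */
        /* Overflow: operands share a sign but the result's sign differs. */
        int overflow  = ((a & 0x8) == (b & 0x8)) && ((sum & 0x8) != (a & 0x8));
        printf("sum=%X carry_out=%d overflow=%d\n", sum, carry_out, overflow);
    }

    int main(void)
    {
        add4(0x7, 0x1); /* 7 + 1 overflows to -8 in 4 bits; no carry out */
        add4(0xF, 0x1); /* -1 + 1 = 0; carry out but no overflow */
        return 0;
    }

Carry out and overflow are independent: carry out matters for unsigned arithmetic, overflow for signed (two's complement) arithmetic.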

S-R Latch

Decoder

Other Combinational Circuits

Multiplexer
Demultiplexer
Decoder
Encoder
Priority Encoder
 

Byte Ordering

Bytes in a word can be numbered from left to right or right to left. Big-endian: the most significant byte (the byte containing the most significant bit) is stored first, and the following bytes are stored in decreasing order of significance. Little-endian is the opposite.
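
A small C sketch that reveals the byte ordering of the machine it runs on (the test value is arbitrary):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t word = 0x01020304;
        unsigned char *bytes = (unsigned char *)&word;

        /* Little-endian machines store the least significant byte (0x04) first;
           big-endian machines store the most significant byte (0x01) first. */
        if (bytes[0] == 0x04)
            printf("little-endian\n");
        else if (bytes[0] == 0x01)
            printf("big-endian\n");
        return 0;
    }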

Memory Hierarchy

In order of increasing access time, storage capacity, and bits you get per dollar spent: Registers; Cache; Main Memory; Magnetic or solid state disk; Tape or Optical Disk.

Cache

A component that stores data to speed up future requests. Cache hit: the requested data is found in the cache; cache miss: it is not. Hit rate: the percentage of accesses that result in cache hits. Caches are relatively small due to cost. Locality of reference - Temporal locality: reuse of specific data and/or resources within a relatively small time window. Spatial locality: use of data elements within relatively close storage locations. Tradeoff: larger caches have better hit rates but longer latency, so small fast caches are backed up by larger, slower caches. Multi-level caches: check the fastest L1 cache first; on a hit, the processor proceeds at high speed. If L1 misses, L2 is checked, and so on, before accessing external memory.

Cache Mapping

Cache type | Hit ratio | Search speed
Direct mapped | Good | Best
N-way set associative | Very good, better as N increases | Good, worse as N increases
Fully associative | Best | Moderate
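
For the direct-mapped case, a hypothetical sketch of how an address is split into tag, index, and offset (the cache geometry of 64 lines of 16 bytes is assumed only for illustration):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 16-byte lines -> 4 offset bits; 64 lines -> 6 index bits. */
        uint32_t addr   = 0x0001ABCD;
        uint32_t offset = addr & 0xF;         /* low 4 bits */
        uint32_t index  = (addr >> 4) & 0x3F; /* next 6 bits choose the line */
        uint32_t tag    = addr >> 10;         /* remaining bits are compared on lookup */
        printf("tag=0x%X index=%u offset=%u\n",
               (unsigned)tag, (unsigned)index, (unsigned)offset);
        return 0;
    }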

Replacement Algorithms

Optimize management of the cache by choosing which items to discard when the cache is full.
Each algorithm is a tradeoff between latency and hit ratio.
Common: LRU, MRU, RR, 2-way set associative, direct-mapped.

Write Policy

Write-through: the write is done synchronously both to the cache and to main memory. ADV: simpler and more reliable; memory is always up to date. DISADV: writes are slower; every write requires a main memory access and uses more memory bandwidth.
Write-back: the write initially goes only to the cache; the write to main memory is postponed until the cache block containing the data is about to be modified/replaced. ADV: faster; uses less memory bandwidth. DISADV: more complex; must track "dirty" locations.

Mean access time

mean access time = c + (1 - h)m
c = cache access time
h = hit ratio
m = main memory access time
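Worked example with assumed numbers (not from the sheet): with c = 1 ns, h = 0.95, and m = 100 ns, mean access time = 1 + (1 - 0.95)(100) = 6 ns.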

Data Depend­encies

True data dependency (RAW): occurs when an instruction depends on the result of a previous instruction
WAR: occurs when an instruction requires a value that is later updated
WAW: occurs when the ordering of instructions will affect the final output value of a variable
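
An illustrative C fragment naming the three dependency types (the variables are arbitrary and chosen only for this example):

    #include <stdio.h>

    int main(void)
    {
        int a = 0, b = 2, c = 3;

        /* RAW (true dependency): the second statement reads the value
           the first one writes. */
        a = b + c;
        c = a + 1;

        /* WAR: the second statement writes a value the first one reads,
           so it must not be reordered ahead of that read. */
        a = b + c;
        b = 5;

        /* WAW: both statements write the same variable, so their order
           determines the final value. */
        a = 1;
        a = 2;

        printf("%d %d %d\n", a, b, c);
        return 0;
    }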
 

CISC vs. RISC

CISC | RISC
Emphasis on hardware | Emphasis on software
Includes multi-clock complex instructions | Single-clock, reduced instructions only
Memory-to-memory: "LOAD" and "STORE" incorporated in instructions | Register-to-register: "LOAD" and "STORE" are independent instructions
Small code size, high cycles per instruction | Low cycles per instruction, large code size
Example: MULT | Example: LOAD, LOAD, PROD, STORE

Perfor­mance equation

time/program = time/cycle x cycles/instruction x instructions/program
CISC minimizes instructions per program at the cost of cycles per instruction
RISC reduces cycles per instruction at the cost of instructions per program
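Worked example with assumed numbers (not from the sheet): at the same time/cycle, a CISC version taking 10 instructions at 6 cycles each costs 60 cycles per program, while a RISC version taking 40 instructions at 1 cycle each costs 40 cycles, so the shorter code is not automatically the faster program.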

State Machines

Moore Machines: output is a function of the state
Mealy Machines: output is a function of the state and the input

Opcode

Portion of a machine language instruction that specifies the operation to be performed

Addressing modes

Immediate
Direct
Indirect
Reg Direct
Reg Indirect
Reg Base-Ind
Reg Base-Scaled Ind

Basic ISA Classes

Accumulator: 1 or 1+x address
Stack: 0 address
Gen Purp Reg: 2 or 3 address
Load/Store: 3 address

Circuits

Speedup Calcul­ation

Speedup = (N x T1) / [(K + (N - 1)) x T]
N = number of instructions; T1 = time for the single-stage (non-pipelined) CPU to complete one instruction
K = number of stages in the pipelined CPU; T = time for one step plus the latch
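Worked example with assumed numbers (not from the sheet): for N = 100 instructions, K = 5 stages, and T1 = 5T, Speedup = (100 x 5T) / [(5 + 99) x T] = 500/104 ≈ 4.8, which approaches K as N grows large.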