Instructions
Cache
Memory (512 B)
Created by:
Harrison Davis
Satish Narayanasamy
University of Michigan

Load
ld [addr]

Store
st [addr]
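
* [addr] may be a number, variable, or expression (see Addresses below)
* example: "ld 12" loads from address 12; "st i + 4" stores to address i + 4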

For Loop
for [v] = [init]:[step]:[end]
  [loop content]
end

* [init], [step], and [end] are numbers
* [v] is the loop variable and must begin with a letter
* [v] is initialized to [init]
* after every iteration, [v] += [step]
* [step] defaults to 1 and can be omitted: "[init]:[end]"
* the loop terminates when [v] == [end]
* C-syntax equivalent:
  for ([v] = [init]; [v] != [end]; [v] += [step]) {
    [loop content]
  }
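
For example, this loop (written in the simulator's own syntax) loads every 4th
byte of the first 32 bytes of memory, accessing addresses 0, 4, 8, ..., 28:

for i = 0:4:32
  ld i
end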

Addresses
[addr] can be a number, a variable, or a mathematical expression combining numbers
and variables with the {+,-,*,/,(,)} operators.
Memory is 512 B and byte-addressable, so addresses are log2(512) = 9 bits wide.
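
For example, inside a loop over i, "ld 2*i + 64" loads from address 2*i + 64 and
"st (i + 1) * 8" stores to address (i + 1) * 8.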

Memory is broken up into blocks. Each block contains [block-size] bytes and each
is uniquely identified by its tag and associated set.
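
For example, with a hypothetical block size of 16 B, memory is divided into
512 / 16 = 32 blocks.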

The memory column is divided into three sections.

Left: Set Color
This section is split up vertically into blocks. Each block is colored with the
color of the set it's associated with.

Middle: Cached or Not
This section is split up vertically into blocks. A block is filled in if it is
cached and left empty if it is not.

Right: Tag
This section is split up vertically and labels the tag for the memory blocks
shown to the left. Tag labels may be omitted if there is not enough room to display
them all.

Addresses are 9 bits long because they must address 512 bytes (2^9 = 512).

The bits of an address are split into three sections: the tag, set index, and block offset.

Tag: Uniquely identifies a block within a set.
Set Index: Identifies the set a block belongs to.
Block Offset: Determines which byte within the block this address refers to.

Read the help info for tag, set index, and block offset for more information.

The tag uniquely identifies a block within a set.

When an address is accessed, its tag and set index are calculated. If its tag
matches a tag within the set specified by set index, then there is a cache hit.
Otherwise, the appropriate block is loaded from memory.

The number of bits in the tag depends on the number of bits in the set index and
block offset. Essentially, the tag bits are the bits left over after the set index
and block offset bits are accounted for.

[addr] = [tag bits] | [set index bits] | [block offset bits]

[# tag bits] = [# addr bits] - [# set index bits] - [# block offset bits]
Bit range: [# addr bits] - 1 : [# set index bits] + [# block offset bits]
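
For example, in a hypothetical configuration with 4 sets (2 set index bits) and
16 B blocks (4 block offset bits), there are 9 - 2 - 4 = 3 tag bits, occupying
bit range 8:6 of the address.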

The set index identifies the set a block belongs to.

[addr] = [tag bits] | [set index bits] | [block offset bits]

[# set index bits] = log2(# of sets)
Bit range: [# set index bits] + [# block offset bits] - 1 : [# block offset bits]
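
In the same hypothetical configuration (4 sets, 16 B blocks),
[# set index bits] = log2(4) = 2, occupying bit range 5:4 of the address.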

The block offset determines which byte within the block the address refers to.

The spatial locality principle states that if one address is
accessed, others near it are likely to be accessed too. Arrays
are good examples of this. When an address misses the cache, its
entire block is loaded from memory and stored in the cache.
The next time that address or one near it is accessed, there
will be a cache hit.
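
For example, with hypothetical 16 B blocks, a load of address 0 that misses brings
addresses 0 through 15 into the cache, so loads of addresses 1 through 15 then hit.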

[addr] = [tag bits] | [set index bits] | [block offset bits]

[# block offset bits] = log2(block size)
Bit range: [# block offset bits] - 1 : 0
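
As a minimal C sketch (not the simulator's code), these three fields can be
extracted with shifts and masks; the set count and block size below are the same
hypothetical values used in the examples above:

  #include <stdio.h>

  /* Hypothetical configuration: 9-bit addresses, 4 sets, 16 B blocks. */
  #define ADDR_BITS   9
  #define SET_BITS    2                                    /* log2(4 sets)      */
  #define OFFSET_BITS 4                                    /* log2(16 B blocks) */
  #define TAG_BITS    (ADDR_BITS - SET_BITS - OFFSET_BITS) /* 3 leftover bits   */

  int main(void) {
      unsigned addr   = 421;                               /* any address < 512 */
      unsigned offset = addr & ((1u << OFFSET_BITS) - 1);
      unsigned set    = (addr >> OFFSET_BITS) & ((1u << SET_BITS) - 1);
      unsigned tag    = addr >> (SET_BITS + OFFSET_BITS);
      printf("addr=%u tag=%u set=%u offset=%u\n", addr, tag, set, offset);
      /* prints: addr=421 tag=6 set=2 offset=5 */
      return 0;
  }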

Caches can be configured in different ways, each providing
benefits that might not be obvious in this simulator.

Fully-Associative: A cache with one set. In this layout,
a memory block can go anywhere within the cache. The benefit
of this setup is that the cache always stores the most recently
used blocks. The downside is that every cache block must be
checked for a matching tag. While this can be done in parallel
in hardware, the effects of fan-out increase the amount of time
these checks take.

Direct-Mapped: A cache with many sets and only one block
per set. The benefit here is that only one block has to be
checked for a matching tag, which is much faster than a fully-
associative cache. The downside is that each memory block can
only go to one location in the cache. Therefore, if you access
two blocks, one after the other, and each maps to the same
set, you'd miss the cache every time.

Set-Associative: A mix of fully-associative and direct-mapped.
In this layout, there are multiple sets, but also multiple blocks
per set. The number of tags to check is still limited, and there
are still several places each block can go. Most caches today are
set-associative.
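
For example, a hypothetical cache with room for 8 blocks could be organized as
fully-associative (1 set of 8 blocks), direct-mapped (8 sets of 1 block), or
2-way set-associative (4 sets of 2 blocks each).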

This cache uses an LRU replacement policy, indicated by the
color of cached blocks. The more colorful a block, the more
recently it's been used. Hover over blocks in the cache or
memory to see more information about them.
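
As a minimal C sketch of the policy (not the simulator's code), LRU within one set
can be implemented by stamping each cached block with the time it was last used
and, on a miss, evicting the block with the oldest stamp:

  #include <stdbool.h>

  #define WAYS 2   /* hypothetical 2-way set */

  struct way { bool valid; unsigned tag; unsigned last_used; };

  static unsigned now;   /* logical clock, incremented on every access */

  /* Access one set; returns true on a hit, false on a miss (block is then filled). */
  bool access_set(struct way set[WAYS], unsigned tag) {
      ++now;
      for (int i = 0; i < WAYS; ++i) {
          if (set[i].valid && set[i].tag == tag) {
              set[i].last_used = now;   /* hit: becomes the most recently used */
              return true;
          }
      }
      /* Miss: prefer an empty way; otherwise evict the least recently used one. */
      int victim = 0;
      for (int i = 1; i < WAYS; ++i) {
          if (!set[i].valid) { victim = i; break; }
          if (set[victim].valid && set[i].last_used < set[victim].last_used)
              victim = i;
      }
      set[victim].valid     = true;
      set[victim].tag       = tag;
      set[victim].last_used = now;
      return false;
  }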