1.
Book
Randal Bryant, David O'Hallaron (authors) ; supervising translation by 五島正裕, 河野健二, 南出靖彦
Publication: Tokyo : Maruzen Publishing, 2019.2  xxvii, 873 p. ; 26 cm
2.
Book
Randal E. Bryant, David R. O'Hallaron
Publication: Boston ; Tokyo : Pearson, [2015]  xxxvi, 1084 p. ; 24 cm
Table of contents:
Preface
About the Authors
A Tour of Computer Systems / 1:
Information Is Bits + Context / 1.1:
Programs Are Translated by Other Programs into Different Forms / 1.2:
It Pays to Understand How Compilation Systems Work / 1.3:
Processors Read and Interpret Instructions Stored in Memory / 1.4:
Hardware Organization of a System / 1.4.1:
Running the hello Program / 1.4.2:
Caches Matter / 1.5:
Storage Devices Form a Hierarchy / 1.6:
The Operating System Manages the Hardware / 1.7:
Processes / 1.7.1:
Threads / 1.7.2:
Virtual Memory / 1.7.3:
Files / 1.7.4:
Systems Communicate with Other Systems Using Networks / 1.8:
Important Themes / 1.9:
Amdahl's Law / 1.9.1:
Concurrency and Parallelism / 1.9.2:
The Importance of Abstractions in Computer Systems / 1.9.3:
Summary / 1.10:
Bibliographic Notes
Solutions to Practice Problems
Program Structure and Execution / Part 1:
Representing and Manipulating Information / 2:
Information Storage / 2.1:
Hexadecimal Notation / 2.1.1:
Data Sizes / 2.1.2:
Addressing and Byte Ordering / 2.1.3:
Representing Strings / 2.1.4:
Representing Code / 2.1.5:
Introduction to Boolean Algebra / 2.1.6:
Bit-Level Operations in C / 2.1.7:
Logical Operations in C / 2.1.8:
Shift Operations in C / 2.1.9:
Integer Representations / 2.2:
Integral Data Types / 2.2.1:
Unsigned Encodings / 2.2.2:
Two's-Complement Encodings / 2.2.3:
Conversions between Signed and Unsigned / 2.2.4:
Signed versus Unsigned in C / 2.2.5:
Expanding the Bit Representation of a Number / 2.2.6:
Truncating Numbers / 2.2.7:
Advice on Signed versus Unsigned / 2.2.8:
Integer Arithmetic / 2.3:
Unsigned Addition / 2.3.1:
Two's-Complement Addition / 2.3.2:
Two's-Complement Negation / 2.3.3:
Unsigned Multiplication / 2.3.4:
Two's-Complement Multiplication / 2.3.5:
Multiplying by Constants / 2.3.6:
Dividing by Powers of 2 / 2.3.7:
Final Thoughts on Integer Arithmetic / 2.3.8:
Floating Point / 2.4:
Fractional Binary Numbers / 2.4.1:
IEEE Floating-Point Representation / 2.4.2:
Example Numbers / 2.4.3:
Rounding / 2.4.4:
Floating-Point Operations / 2.4.5:
Floating Point in C / 2.4.6:
Homework Problems / 2.5:
Machine-Level Representation of Programs / 3:
A Historical Perspective / 3.1:
Program Encodings / 3.2:
Machine-Level Code / 3.2.1:
Code Examples / 3.2.2:
Notes on Formatting / 3.2.3:
Data Formats / 3.3:
Accessing Information / 3.4:
Operand Specifiers / 3.4.1:
Data Movement Instructions / 3.4.2:
Data Movement Example / 3.4.3:
Pushing and Popping Stack Data / 3.4.4:
Arithmetic and Logical Operations / 3.5:
Load Effective Address / 3.5.1:
Unary and Binary Operations / 3.5.2:
Shift Operations / 3.5.3:
Discussion / 3.5.4:
Special Arithmetic Operations / 3.5.5:
Control / 3.6:
Condition Codes / 3.6.1:
Accessing the Condition Codes / 3.6.2:
Jump Instructions / 3.6.3:
Jump Instruction Encodings / 3.6.4:
Implementing Conditional Branches with Conditional Control / 3.6.5:
Implementing Conditional Branches with Conditional Moves / 3.6.6:
Loops / 3.6.7:
Switch Statements / 3.6.8:
Procedures / 3.7:
The Run-Time Stack / 3.7.1:
Control Transfer / 3.7.2:
Data Transfer / 3.7.3:
Local Storage on the Stack / 3.7.4:
Local Storage in Registers / 3.7.5:
Recursive Procedures / 3.7.6:
Array Allocation and Access / 3.8:
Basic Principles / 3.8.1:
Pointer Arithmetic / 3.8.2:
Nested Arrays / 3.8.3:
Fixed-Size Arrays / 3.8.4:
Variable-Size Arrays / 3.8.5:
Heterogeneous Data Structures / 3.9:
Structures / 3.9.1:
Unions / 3.9.2:
Data Alignment / 3.9.3:
Combining Control and Data in Machine-Level Programs / 3.10:
Understanding Pointers / 3.10.1:
Life in the Real World: Using the GDB Debugger / 3.10.2:
Out-of-Bounds Memory References and Buffer Overflow / 3.10.3:
Thwarting Buffer Overflow Attacks / 3.10.4:
Supporting Variable-Size Stack Frames / 3.10.5:
Floating-Point Code / 3.11:
Floating-Point Movement and Conversion Operations / 3.11.1:
Floating-Point Code in Procedures / 3.11.2:
Floating-Point Arithmetic Operations / 3.11.3:
Defining and Using Floating-Point Constants / 3.11.4:
Using Bitwise Operations in Floating-Point Code / 3.11.5:
Floating-Point Comparison Operations / 3.11.6:
Observations about Floating-Point Code / 3.11.7:
Processor Architecture / 4:
The Y86-64 Instruction Set Architecture / 4.1:
Programmer-Visible State / 4.1.1:
Y86-64 Instructions / 4.1.2:
Instruction Encoding / 4.1.3:
Y86-64 Exceptions / 4.1.4:
Y86-64 Programs / 4.1.5:
Some Y86-64 Instruction Details / 4.1.6:
Logic Design and the Hardware Control Language HCL / 4.2:
Logic Gates / 4.2.1:
Combinational Circuits and HCL Boolean Expressions / 4.2.2:
Word-Level Combinational Circuits and HCL Integer Expressions / 4.2.3:
Set Membership / 4.2.4:
Memory and Clocking / 4.2.5:
Sequential Y86-64 Implementations / 4.3:
Organizing Processing into Stages / 4.3.1:
SEQ Hardware Structure / 4.3.2:
SEQ Timing / 4.3.3:
SEQ Stage Implementations / 4.3.4:
General Principles of Pipelining / 4.4:
Computational Pipelines / 4.4.1:
A Detailed Look at Pipeline Operation / 4.4.2:
Limitations of Pipelining / 4.4.3:
Pipelining a System with Feedback / 4.4.4:
Pipelined Y86-64 Implementations / 4.5:
SEQ+: Rearranging the Computation Stages / 4.5.1:
Inserting Pipeline Registers / 4.5.2:
Rearranging and Relabeling Signals / 4.5.3:
Next PC Prediction / 4.5.4:
Pipeline Hazards / 4.5.5:
Exception Handling / 4.5.6:
PIPE Stage Implementations / 4.5.7:
Pipeline Control Logic / 4.5.8:
Performance Analysis / 4.5.9:
Unfinished Business / 4.5.10:
Y86-64 Simulators / 4.6:
Optimizing Program Performance / 5:
Capabilities and Limitations of Optimizing Compilers / 5.1:
Expressing Program Performance / 5.2:
Program Example / 5.3:
Eliminating Loop Inefficiencies / 5.4:
Reducing Procedure Calls / 5.5:
Eliminating Unneeded Memory References / 5.6:
Understanding Modern Processors / 5.7:
Overall Operation / 5.7.1:
Functional Unit Performance / 5.7.2:
An Abstract Model of Processor Operation / 5.7.3:
Loop Unrolling / 5.8:
Enhancing Parallelism / 5.9:
Multiple Accumulators / 5.9.1:
Reassociation Transformation / 5.9.2:
Summary of Results for Optimizing Combining Code / 5.10:
Some Limiting Factors / 5.11:
Register Spilling / 5.11.1:
Branch Prediction and Misprediction Penalties / 5.11.2:
Understanding Memory Performance / 5.12:
Load Performance / 5.12.1:
Store Performance / 5.12.2:
Life in the Real World: Performance Improvement Techniques / 5.13:
Identifying and Eliminating Performance Bottlenecks / 5.14:
Program Profiling / 5.14.1:
Using a Profiler to Guide Optimization / 5.14.2:
The Memory Hierarchy / 6:
Storage Technologies / 6.1:
Random Access Memory / 6.1.1:
Disk Storage / 6.1.2:
Solid State Disks / 6.1.3:
Storage Technology Trends / 6.1.4:
Locality / 6.2:
Locality of References to Program Data / 6.2.1:
Locality of Instruction Fetches / 6.2.2:
Summary of Locality / 6.2.3:
Caching in the Memory Hierarchy / 6.3:
Summary of Memory Hierarchy Concepts / 6.3.2:
Cache Memories / 6.4:
Generic Cache Memory Organization / 6.4.1:
Direct-Mapped Caches / 6.4.2:
Set Associative Caches / 6.4.3:
Fully Associative Caches / 6.4.4:
Issues with Writes / 6.4.5:
Anatomy of a Real Cache Hierarchy / 6.4.6:
Performance Impact of Cache Parameters / 6.4.7:
Writing Cache-Friendly Code / 6.5:
Putting it Together: The Impact of Caches on Program Performance / 6.6:
The Memory Mountain / 6.6.1:
Rearranging Loops to Increase Spatial Locality / 6.6.2:
Exploiting Locality in Your Programs / 6.6.3:
Running Programs on a System / Part 2:
Linking / 7:
Compiler Drivers / 7.1:
Static Linking / 7.2:
Object Files / 7.3:
Relocatable Object Files / 7.4:
Symbols and Symbol Tables / 7.5:
Symbol Resolution / 7.6:
How Linkers Resolve Duplicate Symbol Names / 7.6.1:
Linking with Static Libraries / 7.6.2:
How Linkers Use Static Libraries to Resolve References / 7.6.3:
Relocation / 7.7:
Relocation Entries / 7.7.1:
Relocating Symbol References / 7.7.2:
Executable Object Files / 7.8:
Loading Executable Object Files / 7.9:
Dynamic Linking with Shared Libraries / 7.10:
Loading and Linking Shared Libraries from Applications / 7.11:
Position-Independent Code (PIC) / 7.12:
Library Interpositioning / 7.13:
Compile-Time Interpositioning / 7.13.1:
Link-Time Interpositioning / 7.13.2:
Run-Time Interpositioning / 7.13.3:
Tools for Manipulating Object Files / 7.14:
Exceptional Control Flow / 8:
Exceptions / 8.1:
Classes of Exceptions / 8.1.1:
Exceptions in Linux/x86-64 Systems / 8.1.3:
Logical Control Flow / 8.2:
Concurrent Flows / 8.2.2:
Private Address Space / 8.2.3:
User and Kernel Modes / 8.2.4:
Context Switches / 8.2.5:
System Call Error Handling / 8.3:
Process Control / 8.4:
Obtaining Process IDs / 8.4.1:
Creating and Terminating Processes / 8.4.2:
Reaping Child Processes / 8.4.3:
Putting Processes to Sleep / 8.4.4:
Loading and Running Programs / 8.4.5:
Using fork and execve to Run Programs / 8.4.6:
Signals / 8.5:
Signal Terminology / 8.5.1:
Sending Signals / 8.5.2:
Receiving Signals / 8.5.3:
Blocking and Unblocking Signals / 8.5.4:
Writing Signal Handlers / 8.5.5:
Synchronizing Flows to Avoid Nasty Concurrency Bugs / 8.5.6:
Explicitly Waiting for Signals / 8.5.7:
Nonlocal Jumps / 8.6:
Tools for Manipulating Processes / 8.7:
Virtual Memory / 9:
Physical and Virtual Addressing / 9.1:
Address Spaces / 9.2:
VM as a Tool for Caching / 9.3:
DRAM Cache Organization / 9.3.1:
Page Tables / 9.3.2:
Page Hits / 9.3.3:
Page Faults / 9.3.4:
Allocating Pages / 9.3.5:
Locality to the Rescue Again / 9.3.6:
VM as a Tool for Memory Management / 9.4:
VM as a Tool for Memory Protection / 9.5:
Address Translation / 9.6:
Integrating Caches and VM / 9.6.1:
Speeding Up Address Translation with a TLB / 9.6.2:
Multi-Level Page Tables / 9.6.3:
Putting it Together: End-to-End Address Translation / 9.6.4:
Case Study: The Intel Core i7/Linux Memory System / 9.7:
Core i7 Address Translation / 9.7.1:
Linux Virtual Memory System / 9.7.2:
Memory Mapping / 9.8:
Shared Objects Revisited / 9.8.1:
The fork Function Revisited / 9.8.2:
The execve Function Revisited / 9.8.3:
User-Level Memory Mapping with the mmap Function / 9.8.4:
Dynamic Memory Allocation / 9.9:
The malloc and free Functions / 9.9.1:
Why Dynamic Memory Allocation? / 9.9.2:
Allocator Requirements and Goals / 9.9.3:
Fragmentation / 9.9.4:
Implementation Issues / 9.9.5:
Implicit Free Lists / 9.9.6:
Placing Allocated Blocks / 9.9.7:
Splitting Free Blocks / 9.9.8:
Getting Additional Heap Memory / 9.9.9:
Coalescing Free Blocks / 9.9.10:
Coalescing with Boundary Tags / 9.9.11:
Putting it Together: Implementing a Simple Allocator / 9.9.12:
Explicit Free Lists / 9.9.13:
Segregated Free Lists / 9.9.14:
Garbage Collection / 9.10:
Garbage Collector Basics / 9.10.1:
Mark&Sweep Garbage Collectors / 9.10.2:
Conservative Mark&Sweep for C Programs / 9.10.3:
Common Memory-Related Bugs in C Programs / 9.11:
Dereferencing Bad Pointers / 9.11.1:
Reading Uninitialized Memory / 9.11.2:
Allowing Stack Buffer Overflows / 9.11.3:
Assuming that Pointers and the Objects They Point to Are the Same Size / 9.11.4:
Making Off-by-One Errors / 9.11.5:
Referencing a Pointer Instead of the Object it Points to / 9.11.6:
Misunderstanding Pointer Arithmetic / 9.11.7:
Referencing Nonexistent Variables / 9.11.8:
Referencing Data in Free Heap Blocks / 9.11.9:
Introducing Memory Leaks / 9.11.10:
Interaction and Communication between Programs / Part 3:
System-Level I/O / 10:
Unix I/O / 10.1:
Opening and Closing Files / 10.2:
Reading and Writing Files / 10.4:
Robust Reading and Writing with the Rio Package / 10.5:
Rio Unbuffered Input and Output Functions / 10.5.1:
Rio Buffered Input Functions / 10.5.2:
Reading File Metadata / 10.6:
Reading Directory Contents / 10.7:
Sharing Files / 10.8:
I/O Redirection / 10.9:
Standard I/O / 10.10:
Putting it Together: Which I/O Functions Should I Use? / 10.11:
Network Programming / 11:
The Client-Server Programming Model / 11.1:
Networks / 11.2:
The Global IP Internet / 11.3:
IP Addresses / 11.3.1:
Internet Domain Names / 11.3.2:
Internet Connections / 11.3.3:
The Sockets Interface / 11.4:
Socket Address Structures / 11.4.1:
The socket Function / 11.4.2:
The connect Function / 11.4.3:
The bind Function / 11.4.4:
The listen Function / 11.4.5:
The accept Function / 11.4.6:
Host and Service Conversion / 11.4.7:
Helper Functions for the Sockets Interface / 11.4.8:
Example Echo Client and Server / 11.4.9:
Web Servers / 11.5:
Web Basics / 11.5.1:
Web Content / 11.5.2:
HTTP Transactions / 11.5.3:
Serving Dynamic Content / 11.5.4:
Putting it Together: The Tiny Web Server / 11.6:
Concurrent Programming / 12:
Concurrent Programming with Processes / 12.1:
A Concurrent Server Based on Processes / 12.1.1:
Pros and Cons of Processes / 12.1.2:
Concurrent Programming with I/O Multiplexing / 12.2:
A Concurrent Event-Driven Server Based on I/O Multiplexing / 12.2.1:
Pros and Cons of I/O Multiplexing / 12.2.2:
Concurrent Programming with Threads / 12.3:
Thread Execution Model / 12.3.1:
Posix Threads / 12.3.2:
Creating Threads / 12.3.3:
Terminating Threads / 12.3.4:
Reaping Terminated Threads / 12.3.5:
Detaching Threads / 12.3.6:
Initializing Threads / 12.3.7:
A Concurrent Server Based on Threads / 12.3.8:
Shared Variables in Threaded Programs / 12.4:
Threads Memory Model / 12.4.1:
Mapping Variables to Memory / 12.4.2:
Shared Variables / 12.4.3:
Synchronizing Threads with Semaphores / 12.5:
Progress Graphs / 12.5.1:
Semaphores / 12.5.2:
Using Semaphores for Mutual Exclusion / 12.5.3:
Using Semaphores to Schedule Shared Resources / 12.5.4:
Putting it Together: A Concurrent Server Based on Prethreading / 12.5.5:
Using Threads for Parallelism / 12.6:
Other Concurrency Issues / 12.7:
Thread Safety / 12.7.1:
Reentrancy / 12.7.2:
Using Existing Library Functions in Threaded Programs / 12.7.3:
Races / 12.7.4:
Deadlocks / 12.7.5:
Error Handling / A:
Error Handling in Unix Systems / A.1:
Error-Handling Wrappers / A.2:
References
Index