๐ŸŽˆ Operating System Quiz

jaypyonยท2021๋…„ 4์›” 17์ผ

Introduction

In a multiprogramming and time-sharing environment, several users share the system simultaneously. This situation can result in various security problems.

a. What are two such problems?
b. Can we ensure the same degree of security in a time-shared machine as in a dedicated machine? Explain your answer.

  1. ์ž์›์˜ ๋ฌด๋‹จ ๋ณต์ œ(stealing, copying)์™€, ์ˆ˜์ •์ด ๊ฐ€๋Šฅํ•˜๋‹ค. ์ ์ ˆํ•œ ๊ณ„์ •์—†์ด ์ปดํ“จํ„ฐ๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค๋Š” ๋ฌธ์ œ๊ฐ€ ์žˆ๋‹ค.
  2. ๋ถˆ๊ฐ€๋Šฅํ•˜๋‹ค. 1๋ฒˆ์—์„œ ์„œ์ˆ ํ•œ ๋‘๊ฐ€์ง€์˜ ๋ฌธ์ œ์ ๊ณผ, ๋‹ค์ˆ˜์˜ ์ธ์›์ด ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋ฐ”์ด๋Ÿฌ์Šค์—๋„ ์ทจ์•ฝํ•˜๋‹ค. dedicated machine์„ ์‚ฌ์šฉํ•˜์ž.

The issue of resource utilization shows up in different forms in different types of operating systems. List what resources must be managed carefully in the following settings:

a. Mainframe or minicomputer systems: CPU time, main memory, storage
b. Workstations connected to servers: network bandwidth
c. Handheld computers: power consumption (battery)

Under what circumstances would a user be better off using a time-sharing system rather than a PC or a single-user workstation?

์ž‘์—…์ด ๋„ˆ๋ฌด ํฐ ๊ฒฝ์šฐ single user workstation ๋ณด๋‹ค ์ปดํ“จํŒ… ํŒŒ์›Œ๊ฐ€ ํฐ ์žฅ๋น„๋กœ ์ˆ˜ํ–‰ํ•˜๋Š” time-sharing system์ด ์ ํ•ฉํ•  ๊ฒƒ ๊ฐ™๋‹ค. ๋‹ค๋งŒ ๋„ˆ๋ฌด ๋งŽ์ง€ ์•Š์€ ์‚ฌ์šฉ์ž๊ฐ€ time-sharing ํ•ด์•ผํ•œ๋‹ค๋Š” ์ œ์•ฝ์กฐ๊ฑด์ด ๋”ฐ๋ฅธ๋‹ค.

Identify which of the functionalities listed below need to be supported by the operating system for (a) handheld devices and (b) real-time systems.

a. Batch programming
b. Virtual memory : (a),(b)
c. Time sharing : (b)

Describe the differences between symmetric and asymmetric multiprocessing. What are three advantages and one disadvantage of multiprocessor systems?

  • ๋น ๋ฅธ ์†๋„, ์—ฌ๋Ÿฌ๊ฐœ์˜ ์‹ฑ๊ธ€ ํ”„๋กœ์„ธ์„œ๋ณด๋‹ค ์œ ์ง€๋ณด์ˆ˜๊ฐ€ ์‰ฌ์›€, ํ•œ ํ”„๋กœ์„ธ์„œ๊ฐ€ ์ฃฝ์–ด๋„ ๋™์ž‘ํ•จ. ๊ฐ๊ฐ์˜ ์ฝ”์–ด๋Š” ๋…๋ฆฝ์ ์œผ๋กœ ์ž‘๋™์ด ๊ฐ€๋Šฅํ•จ.(์Šค์ผ€์ฅด๋Ÿฌ๊ฐ€ ์—ฌ๋Ÿฌ๊ฐœ ์กด์žฌํ•  ์ˆ˜ ์žˆ์Œ)
  • NUMA๋Š” ๋กœ์ปฌ ๋ฉ”๋ชจ๋ฆฌ๊ฐ€ ์•„๋‹Œ ๋‹ค๋ฅธ ๋ฉ”๋ชจ๋ฆฌ์— ์ ‘๊ทผํ• ๋•Œ ๋งํฌ๋ฅผ ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๋ฉ”๋ชจ๋ฆฌ์— ๋Œ€ํ•œ ์ ‘๊ทผ ์†๋„๊ฐ€ UMA์— ๋น„ํ•ด์„œ ๋Š๋ฆผ. ์Šค์ผ€์ฅด๋ง์ด ๋ณต์žกํ•จ.

How do clustered systems differ from multiprocessor systems? What is required for two machines belonging to a cluster to cooperate to provide a highly available service?

๋™๊ธฐ, ๋น„๋™๊ธฐ์‹ ํด๋Ÿฌ์Šคํ„ฐ๋ง์ด ์ œ๊ณต๋œ๋‹ค๋ฉด ๊ณ ๊ฐ€์šฉ์„ฑ์„ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋‹ค. ๋™๊ธฐ์‹์˜ ๊ฒฝ์šฐ Active-Active ์ƒํƒœ๋ฅผ ์œ ์ง€ํ•˜๋ฉฐ ๋น„๋™๊ธฐ์‹ ํด๋Ÿฌ์Šคํ„ฐ๋ง์˜ ๊ฒฝ์šฐ Active-Standby ์ƒํƒœ๋ฅผ ์œ ์ง€ํ•œ๋‹ค. ๋‘˜ ๋‹ค ํ•œ์ชฝ์ด ์‹คํŒจํ–ˆ์„ ๋•Œ ํ•ด๋‹น ์žฅ์น˜๋ฅผ ๋Œ€์ฒดํ•˜์—ฌ ๋™์ž‘ํ•  ์ˆ˜ ์žˆ๋‹ค. clustered system์€ ์—ฌ๋Ÿฌ๊ฐœ์˜ ์žฅ์น˜์˜ ๊ตฐ์ง‘์„ ๋œปํ•˜๊ณ , ๋ฉ€ํ‹ฐํ”„๋กœ์„ธ์„œ ์‹œ์Šคํ…œ์€ ํ•œ๋Œ€์˜ ๋ฌผ๋ฆฌ ์žฅ์น˜๋กœ ๊ตฌ์„ฑ๋œ๋‹ค.

Distinguish between the clientโ€“server and peer-to-peer models of distributed systems.

server๋Š” client์˜ ์š”์ฒญ์— ์‘๋‹ต ๋ฐ ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ตฌ์กฐ์ด๋‹ค. peer๋Š” ๊ทธ ์ž์ฒด๋กœ ์„œ๋ฒ„์ด์ž ํด๋ผ์ด์–ธํŠธ์˜ ์—ญํ• ์„ ์ˆ˜ํ–‰ํ•œ๋‹ค. ํŒŒ์ผ์„ ๊ณต์œ ํ•  ๋•Œ peer to peer ๊ตฌ์กฐ์—์„œ๋Š” ์—ฌ๋Ÿฌ peer๋กœ๋ถ€ํ„ฐ ํŒŒ์ผ์„ ๋‹ค์šด๋กœ๋“œ ํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธ์— ์†๋„๊ฐ€ ๋น ๋ฅด๋‹ค.(๋น„ํŠธํ† ๋ ŒํŠธ) ๋˜ํ•œ ๋†’์€ ์€๋‹‰์„ฑ๋„ ์ œ๊ณตํ•œ๋‹ค.(ํ† ๋ฅด ๋ธŒ๋ผ์šฐ์ €).

Consider a computing cluster consisting of two nodes running a database. Describe two ways in which the cluster software can manage access to the data on the disk. Discuss the benefits and disadvantages of each.

๋น„๋™๊ธฐ์‹/๋™๊ธฐ์‹ ํด๋Ÿฌ์Šคํ„ฐ๋ง์œผ๋กœ ํ•œ๊ฐœ์˜ ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค๋ฅผ ๊ตฌ์„ฑํ•˜๊ณ  ํ•ด๋‹น ๋‚ด์šฉ์„ duplicateํ•˜๋Š” ๋ฐฉ์‹์œผ๋กœ ๊ณ ๊ฐ€์šฉ์„ฑ์„ ์ง€์›ํ•˜๋Š” ๋ฐฉ์‹์˜ ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค ์šด์šฉ ๋ฐฉ์‹์ด ์žˆ๊ณ , ๋‹ค๋ฅธ ํ•˜๋‚˜๋Š” ๋ณ‘๋ ฌ ํด๋Ÿฌ์Šคํ„ฐ๋ง์„ ์ด์šฉํ•˜์—ฌ ๊ณต์œ DB๋ฅผ ๊ตฌ์„ฑํ•˜๋Š” ๋ฐฉ์‹์ด ์žˆ๋‹ค. ๋ณ‘๋ ฌ ํด๋Ÿฌ์Šคํ„ฐ๋ง์˜ ๊ฒฝ์šฐ ๋‘ ๋Œ€์˜ ์ปดํ“จํŒ… ์ž์›์„ ๋ชจ๋‘ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ๊ฒŒ ์ด์ ์ด์ง€๋งŒ ๊ณต์œ  ๋ฐ์ดํ„ฐ๋ฒ ์ด์Šค์˜ ํŠน์„ฑ์ƒ ๋ฝํ‚น๊ธฐ๋ฒ•์ด ํ•„์š”ํ•˜๋ฉฐ ๋ฝํ‚น์—๋Š” ์˜ค๋ฒ„ํ—ค๋“œ๊ฐ€ ๋ฐœ์ƒํ•œ๋‹ค๋Š”๊ฒŒ ๋‹จ์ ์ด๋‹ค. ๋™๊ธฐ/๋น„๋™๊ธฐ์‹์˜ ๊ฒฝ์šฐ ๊ณ ๊ฐ€์šฉ์„ฑ ์ž์ฒด๊ฐ€ ์ด์ ์ด์ง€๋งŒ ๋‘ ๋Œ€์˜ ์ปดํ“จํŒ…์ž์›์„ ๋ชจ๋‘ ํ™œ์šฉํ•˜์ง€๋Š” ๋ชปํ•œ๋‹ค๋Š” ์ ์ด ๋‹จ์ ์ด๋‹ค.

How are network computers different from traditional personal computers? Describe some usage scenarios in which it is advantageous to use network computers.

NC๋Š” ๊ฒฝ๋Ÿ‰ํ™”๋œ ์šด์˜์ฒด์ œ์™€ ์ตœ์†Œํ•œ์˜ ๋ฉ”๋ชจ๋ฆฌ, CPU๋ฅผ ๊ฐ–์ถ”๋ฉด ๋˜๋ฉฐ ๊ต‰์žฅํžˆ ๊ฐ€๊ฒฉ์ด ์‹ธ๋‹ค. ์ด๋Š” ์ „ํ†ต์ ์ธ PC๊ฐ€ ์‚ฌ์šฉ์ž๊ฐ€ ์‚ฌ์šฉํ•˜๊ธฐ ํŽธ๋ฆฌํ•œ ์šด์˜์ฒด์ œ๋ฅผ ๊ฐ–์ถ”๊ณ  ๋†’์€ ๋ฉ”๋ชจ๋ฆฌ์™€ ๊ณ ์„ฑ๋Šฅ CPU๋ฅผ ๊ฐ–์ถ”๋Š” ๊ฒƒ๊ณผ๋Š” ๋Œ€์กฐ๋œ๋‹ค. NC๋Š” ๋„คํŠธ์›Œํฌ๋งŒ ์ง€์›๋œ๋‹ค๋ฉด ์›๊ฒฉ์ง€์˜ ์ˆ˜ํผ์ปดํ“จํ„ฐ ์ž์›์„ ํ• ๋‹น๋ฐ›์•„์„œ ์ด์šฉํ•˜๋Š” ๋ฐฉ์‹์ด๊ณ , ๊ฐ€๊ฒฉ์ด ์‹ผ NC์˜ ํŠน์„ฑ์ƒ ์–ด๋””์—์„œ๋„ ์†์‰ฝ๊ฒŒ ํ•˜๋“œ์›จ์–ด๋ฅผ ๊ตฌ์ž…ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ ๊ฐ„๋‹จํ•œ ๋กœ๊ทธ์ธ์œผ๋กœ ์ ‘๊ทผ์ด ๊ฐ€๋Šฅํ•˜๋‹ค๋Š” ์žฅ์ ์„ ๊ฐ€์ง„๋‹ค. ๋˜ํ•œ NC๋Š” ์›๊ฒฉ์ง€์˜ ์ˆ˜ํผ์ปดํ“จํ„ฐ๋ฅผ ๋‹ค์ˆ˜์˜ ์‚ฌ์šฉ์ž๊ฐ€ ์ด์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ž์›์ด ๋†€๊ณ ์žˆ๋Š” ์ƒํƒœ๊ฐ€ ์ค„์–ด๋“ ๋‹ค.

What is the purpose of interrupts? What are the differences between a trap and an interrupt? Can traps be generated intentionally by a user program? If so, for what purpose?

์ธํ„ฐ๋ŸฝํŠธ๋Š” ํ•˜๋“œ์›จ์–ด์—์„œ ๋ฐœ์ƒํ•˜๋Š” ์‹ ํ˜ธ์ด๋‹ค. ์ธํ„ฐ๋ŸฝํŠธ๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด ๋ฌธ๋งฅ๊ตํ™˜์ด ์ผ์–ด๋‚˜๊ณ  ์ธํ„ฐ๋ŸฝํŠธ ์„œ๋น„์Šค ๋ฃจํ‹ด์„ ์‹คํ–‰ํ•œ๋‹ค. ํŠธ๋žฉ์€ ์ผ๋ฐ˜์ ์œผ๋กœ ์†Œํ”„ํŠธ์›จ์–ด์—์„œ ๋ฐœ์ƒํ•œ ์ธํ„ฐ๋ŸฝํŠธ๋ผ๊ณ  ๋ถˆ๋ฆฌ๋ฉฐ ์œ ์ €ํ”„๋กœ๊ทธ๋žจ์—์„œ๋„ ์˜๋„์ ์œผ๋กœ ๋ฐœ์ƒ ๊ฐ€๋Šฅํ•˜๋‹ค. ์˜๋„์ ์œผ๋กœ arithmetic error๋ฅผ ์ผ์œผํ‚ค๋Š” ์ฝ”๋“œ๋ฅผ ์ž‘์„ฑํ•˜๋Š” ํ–‰์œ„๊ฐ€ ์˜๋„์  ๋ฐœ์ƒ์˜ ์˜ˆ์ด๋‹ค.

Direct memory access is used for high-speed I/O devices in order to avoid increasing the CPUโ€™s execution load.

a. How does the CPU interface with the device to coordinate the transfer?
The CPU sets up the DMA controller (DMAC); the DMAC then performs the transfer on its own and, when it is done, notifies the CPU with an interrupt.

b. How does the CPU know when the memory operations are complete?
The DMA controller raises an interrupt to tell the CPU that the operation has completed.

c. The CPU is allowed to execute other programs while the DMA controller is transferring data. Does this process interfere with the execution of the user programs? If so, describe what forms of interference are caused.
Yes. When the DMA controller accesses memory, cycle stealing occurs on the memory bus; because the DMA transfer has priority, the CPU cannot access memory during those cycles, so the normal execution of user programs can be slowed down.

Give two reasons why caches are useful. What problems do they solve? What problems do they cause? If a cache can be made as large as the device for which it is caching (for instance, a cache as large as a disk), why not make it that large and eliminate the device?

  • ์บ์‹œ๋Š” ๋ฉ”๋ชจ๋ฆฌ๋กœ์˜ ์•ก์„ธ์Šค๋ฅผ ์ค„์—ฌ์ค€๋‹ค. ์บ์‹œ์˜ ์ด์šฉ์€ ์ ‘๊ทผ ์†๋„๊ฐ€ ๋น ๋ฅธ ์ €์žฅ์žฅ์น˜์ด๊ธฐ ๋•Œ๋ฌธ์— ์ „์ฒด ์ปดํ“จํ„ฐ ์ฒ˜๋ฆฌ์†๋„์˜ ํ–ฅ์ƒ์ด ์ด๋ฃจ์–ด์ง„๋‹ค.
  • ๋น ๋ฅธ ์žฅ์น˜์ธ CPU์™€ ๋Š๋ฆฐ ์žฅ์น˜ ์‚ฌ์ด์˜ ์ง€์—ฐ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•œ๋‹ค.
  • NUMA ๋ชจ๋ธ์—์„œ P1 ํ”„๋กœ์„ธ์„œ๊ฐ€ 5๋ผ๋Š” ๊ฐ’์„ ๊ฐ€์ง„ A๋ฅผ ๊ฐ€์ ธ์™€์„œ P1์˜ ๋กœ์ปฌ์บ์‹œ์— ์ €์žฅํ–ˆ๊ณ , ๊ทธ ๋‹ค์Œ์œผ๋กœ P2 ํ”„๋กœ์„ธ์„œ๊ฐ€ A๋ฅผ ์ž์‹ ์˜ ๋กœ์ปฌ์บ์‹œ์— ์ €์žฅํ–ˆ๋‹ค. ์งํ›„ P1์€ A๋ฅผ 10์œผ๋กœ ๋ณ€๊ฒฝํ•œ๋‹ค๊ณ  ํ•ด๋„ P2์— ๋‚จ์•„์žˆ๋Š” A์˜ ๊ฐ’์€ ๊ทธ๋Œ€๋กœ๋ผ๋Š” ๋ฌธ์ œ๊ฐ€ ์กด์žฌํ•œ๋‹ค.
  • ๋งŒ์•ฝ ์บ์‹œ๊ฐ€ ํœ˜๋ฐœ์„ฑ์ด ์•„๋‹ˆ๋ฉฐ ๊ฐ€๊ฒฉ์ด ํ•ฉ๋‹นํ•˜๋‹ค๋ฉด ์ €์žฅ์†Œ๋ฅผ ๋Œ€์ฒดํ•  ์ˆ˜ ์žˆ๋‹ค.

Consider an SMP system similar to what is shown in Figure 1.6. Illustrate with an example how data residing in memory could in fact have two different values in each of the local caches.

์œ„ ๋ฌธ์ œ์˜ data update์™€ ๋™์ผํ•œ ๋‚ด์šฉ์ด๋‹ค.

Discuss, with examples, how the problem of maintaining coherence of cached data manifests itself in the following processing environments:

a. Single-processor systems
b. Multiprocessor systems
c. Distributed systems

a. ํ”„๋กœ์„ธ์„œ๊ฐ€ ์บ์‹œ ๊ฐ’์„ ๋ณ€๊ฒฝํ•˜๋ฉด ๋ฉ”๋ชจ๋ฆฌ์— ์—…๋ฐ์ดํŠธ ํ•œ๋‹ค.
b. ํ”„๋กœ์„ธ์„œ๊ฐ€ ๋กœ์ปฌ์บ์‹œ ๊ฐ’์„ ๋ณ€๊ฒฝํ•˜๋ฉด ๋‹ค๋ฅธ ํ”„๋กœ์„ธ์„œ๋“ค์˜ ๋กœ์ปฌ์บ์‹œ ๊ฐ’๋„ ๋ณ€๊ฒฝ ๋˜๋Š” ์‚ฌ์šฉ๋ถˆ๊ฐ€ ์ƒํƒœ๊ฐ€ ๋˜์–ด์•ผ ํ•˜๋ฉฐ ํ•ด๋‹น ๋‚ด์šฉ์ด ๋ฉ”๋ชจ๋ฆฌ์— ์—…๋ฐ์ดํŠธ ๋˜์–ด์•ผํ•œ๋‹ค.
c. ๋ณ„๋„์˜ ์—…๋ฐ์ดํŠธ๋Š” ํ•˜์ง€ ์•Š๋Š”๋‹ค. ํด๋ผ์ด์–ธํŠธ์˜ ์บ์‹œ ํŒŒ์ผ ๋ฐ์ดํ„ฐ์— ์˜ํ•ด ๋ถˆ์ผ์น˜ ํ˜„์ƒ์ด ๋‚˜ํƒ€๋‚  ์ˆ˜๋„ ์žˆ๋‹ค.

Describe a mechanism for enforcing memory protection in order to prevent a program from modifying the memory associated with other programs.

Memory protection can be enforced with a base register and a limit register (a minimal sketch of the check follows below).

base register: holds the smallest legal physical address of the process (its starting point).
limit register: holds the size of the range the process may access, starting from the base.
Every address the program generates must satisfy base <= address < base + limit; otherwise the hardware traps to the operating system.

Operating system structure

The services and functions provided by an operating system can be divided into two main categories. Briefly describe the two categories and discuss how they differ.

  • ํ”„๋กœ์„ธ์Šค์˜ ๋ณดํ˜ธ
  • ํ•˜๋“œ์›จ์–ด๊ฐ€ ์ œ๊ณตํ•˜์ง€ ๋ชปํ•˜๋Š” ๋‹ค์–‘ํ•œ ๊ธฐ๋Šฅ์˜ ์ œ๊ณต (ํŒŒ์ผ์‹œ์Šคํ…œ, ๊ฐ€์ƒ๋ฉ”๋ชจ๋ฆฌ ๋“ฑ)

Describe three general methods for passing parameters to the operating system.

  1. ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ๋ ˆ์ง€์Šคํ„ฐ์— ์ „๋‹ฌํ•œ๋‹ค.
  2. ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ๋ฉ”๋ชจ๋ฆฌ๋‚ด์˜ ๋ถˆ๋ก์ด๋‚˜ ํ…Œ์ด๋ธ”์— ์ €์žฅํ•˜๊ณ  ๋ธ”๋ก์˜ ์ฃผ์†Œ๊ฐ’์„ ๋ ˆ์ง€์Šคํ„ฐ์— ์ „๋‹ฌํ•œ๋‹ค.
  3. ๋งค๊ฐœ๋ณ€์ˆ˜๋Š” ํ”„๋กœ๊ทธ๋žจ์— ์˜ํ•ด ์Šคํƒ์— Push๋˜๊ณ , OS์— ์˜ํ•ด Pop๋œ๋‹ค.
  • OS๋Š” ๋ธ”๋ก์ด๋‚˜ ์Šคํƒ๋ฐฉ์‹ (2),(3)๋ฒˆ์„ ์„ ํ˜ธํ•œ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ์ „๋‹ฌ๋˜๋Š” ๋งค๊ฐœ๋ณ€์ˆ˜์˜ ๊ฐœ์ˆ˜๋‚˜ ๊ธธ์ด๋ฅผ ์ œํ•œํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค.

What are the five major activities of an operating system in regard to file management?

ํŒŒ์ผ์˜ ์ƒ์„ฑ, ํŒŒ์ผ์˜ ์‚ญ์ œ, ๋””๋ ‰ํ„ฐ๋ฆฌ ์ƒ์„ฑ, ๋””๋ ‰ํ„ฐ๋ฆฌ ์‚ญ์ œ, The application's symbolic instructions need to be translated into the machine-level instructions either by an interpreter or by compiling the application code, ํŒŒ์ผ ๋ฐฑ์—…, ์˜๊ตฌ ์ €์žฅ์†Œ๋กœ ๋งคํ•‘

Would it be possible for the user to develop a new command interpreter using the system-call interface provided by the operating system?

Yes. A user-level program can read commands and execute them through system calls such as fork() and exec(); the application's symbolic instructions need to be translated into machine-level instructions either by an interpreter or by compiling the application code, so a new command interpreter can be built entirely on the system-call interface.

Processes

Describe the differences among short-term, medium-term, and long-term scheduling.

Short-term (CPU scheduler): selects from among the jobs in memory that are ready to execute and allocates the CPU to one of them.
Medium-term: used especially in time-sharing systems; it removes processes from memory to ease the competition for the CPU, reducing the degree of multiprogramming. The process is later brought back into memory and resumed where it left off; this technique is called swapping.
Long-term: loads jobs from secondary storage into main memory, making the corresponding processes ready to run.
The main difference is how frequently they run. The short-term scheduler must select a new process very often, while the long-term scheduler waits for jobs to finish before admitting new ones, so it runs comparatively rarely.

Describe the actions taken by a kernel to context-switch between processes.

์ปค๋„์€ ํ˜„์žฌ ์‹คํ–‰์ค‘์ธ ํ”„๋กœ์„ธ์Šค์˜ ์ƒํƒœ๋ฅผ ์ €์žฅํ•ด์•ผ๋งŒ ํ•œ๋‹ค. ๋˜, ๋‹ค์Œ์œผ๋กœ ์‹คํ–‰๋  ํ”„๋กœ์„ธ์Šค์˜ ์ƒํƒœ๋ฅผ restore ํ•ด์•ผํ•œ๋‹ค.

In response to a clock interrupt, the OS saves the PC and the user stack pointer of the currently executing process and transfers control to the kernel's clock interrupt handler.

The clock interrupt handler then saves the rest of the registers, as well as other machine state such as the floating-point registers, in the process's PCB.

The OS invokes the scheduler to determine the next process to execute.

The OS then retrieves the state of that next process from its PCB and restores the registers. The restore operation takes the processor back to the state in which the next process was previously interrupted, executing in user code with user-mode privileges.

Construct a process tree similar to Figure 3.8. To obtain process information for the UNIX or Linux system, use the command ps -ael. Use the command man ps to get more information about the ps command. On Windows systems, you will have to use the task manager.

Including the initial parent process, how many processes are created by the program shown in Figure 3.32?

๋„ค ๋ฒˆ์˜ fork()๊ฐ€ ์ง„ํ–‰๋œ๋‹ค.

#1 2๊ฐœ์˜ ํ”„๋กœ์„ธ์Šค๋กœ ๋ถ„๊ธฐํ•œ๋‹ค.
#2 2๊ฐœ์˜ ํ”„๋กœ์„ธ์Šค๊ฐ€ ๊ฐ๊ฐ 2๊ฐœ๋กœ ๋ถ„๊ธฐํ•˜์—ฌ ์ด 4๊ฐœ๊ฐ€ ๋œ๋‹ค.
#3 4๊ฐœ์˜ ํ”„๋กœ์„ธ์Šค๊ฐ€ ๊ฐ๊ฐ 2๊ฐœ๋กœ ๋ถ„๊ธฐํ•˜์—ฌ ์ด 8๊ฐœ๊ฐ€ ๋œ๋‹ค.
#4 8๊ฐœ์˜ ํ”„๋กœ์„ธ์Šค๊ฐ€ ๊ฐ๊ฐ 2๊ฐœ๋กœ ๋ถ„๊ธฐํ•˜์—ฌ ์ด 16๊ฐœ๊ฐ€ ๋œ๋‹ค.

Explain the circumstances under which the line of code marked printf("LINE J") in Figure 3.33 will be reached.

ํ•ด๋‹น ์ฝ”๋“œ์— ๋„๋‹ฌํ•  ์ˆ˜ ์žˆ๋Š” ํ™˜๊ฒฝ์€ ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.
fork() ์‹œ์Šคํ…œ ์ฝœ๋กœ ๋ถ„๊ธฐ๋œ ์ž์‹ ํ”„๋กœ์„ธ์Šค๋Š” pid๊ฐ€ 0์ด๋‹ค. ๋”ฐ๋ผ์„œ ์ž์‹ ํ”„๋กœ์„ธ์Šค์—์„œ๋Š” ls ๋ช…๋ น์–ด๋ฅผ ์‹คํ–‰์‹œํ‚จ ํ›„ โ€œLINE Jโ€๋ฅผ ์ถœ๋ ฅํ•˜๊ฒŒ ๋œ๋‹ค. ๋ถ€๋ชจ ํ”„๋กœ์„ธ์Šค๋Š” wait(NULL)๋กœ ์ž์‹ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์ข…๋ฃŒ๋  ๋•Œ๊นŒ์ง€ ๊ธฐ๋‹ค๋ ธ๋‹ค๊ฐ€ โ€œChild Completeโ€๋ฅผ ์ถœ๋ ฅํ•œ๋‹ค.

Using the program in Figure 3.34, identify the values of pid at lines A, B, C, and D. (Assume that the actual pids of the parent and child are 2600 and 2603, respectively.)

A=0, B=2603, C=2603, D=2600
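A hedged sketch of the shape of Figure 3.34 under the stated pids (parent 2600, child 2603): fork() returns 0 in the child and the child's pid in the parent, while getpid() returns each process's own pid.

```c
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();

    if (pid == 0) {                              /* child  */
        pid_t pid1 = getpid();
        printf("child: pid = %d\n", (int)pid);   /* A = 0    */
        printf("child: pid1 = %d\n", (int)pid1); /* B = 2603 */
    } else if (pid > 0) {                        /* parent */
        pid_t pid1 = getpid();
        printf("parent: pid = %d\n", (int)pid);   /* C = 2603 */
        printf("parent: pid1 = %d\n", (int)pid1); /* D = 2600 */
        wait(NULL);
    }
    return 0;
}
```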

Give an example of a situation in which ordinary pipes are more suitable than named pipes and an example of a situation in which named pipes are more suitable than ordinary pipes.

์‰˜ ๋งŒ๋“ค๊ธฐ ๊ณผ์ œ๋ฅผ ํ•  ๋•Œ, pipe๋ฅผ ๊ตฌํ˜„ํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•œ๋‹ค๋ฉด, ordinary pipe๊ฐ€ ์ ๋‹นํ•  ๊ฒƒ์ด๋ผ๊ณ  ์ƒ๊ฐํ•œ๋‹ค. ์™œ๋ƒํ•˜๋ฉด ์‹คํ–‰๋œ ์ „ ๋ช…๋ น์–ด๋ฅผ ์ด์šฉํ•˜์—ฌ ๋‹ค์Œ ๋ช…๋ น์–ด์—์„œ ์‚ฌ์šฉํ•˜๊ธฐ ๋•Œ๋ฌธ์— ํ•ด๋‹น ๋‚ด์šฉ์ด ์ €์žฅ๋˜์–ด ๋‚จ์„ ํ•„์š”๊ฐ€ ์—†๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ํ•˜์ง€๋งŒ ๋งŒ์•ฝ ๋ณด์•ˆ ๊ด€์ ์—์„œ Accountability๋ฅผ ์œ„ํ•ด ์‚ฌ์šฉ์ž๊ฐ€ ์ž…๋ ฅํ•œ ๋ชจ๋“  ๋ช…๋ น์–ด๋ฅผ logging ํ•œ๋‹ค๊ณ  ๊ฐ€์ •ํ•˜๋Š” ์ƒํ™ฉ์ด๋ผ๋ฉด named pipe๊ฐ€ ์ ๋‹นํ•  ๊ฒƒ์œผ๋กœ ๋ณด์ธ๋‹ค.

Consider the RPC mechanism. Describe the undesirable circumstances that could arise from not enforcing either the "at most once" or "exactly once" semantics. Describe possible uses for a mechanism that had neither of these guarantees.

๋‘ ๋ช…๋ น์–ด ๋ชจ๋‘ ํ•œ๋ฒˆ์˜ ํ˜ธ์ถœ ์„ ์˜๋ฏธํ•˜์ง€๋งŒ, ์‹คํŒจ์‹œ์˜ ๋™์ž‘์ด ๋‹ค๋ฅด๋‹ค.
at-least-once์˜ ๊ฒฝ์šฐ ์‹œ์Šคํ…œ์ด ํ•ด๋‹น ํ•จ์ˆ˜ ํ˜ธ์ถœ์ด ์„ฑ๊ณตํ–ˆ๋‹ค๋Š” ์‚ฌ์‹ค์„ ์•Œ ๋•Œ๊นŒ์ง€ ๋ฐ˜๋ณตํ•˜๋ฏ€๋กœ ๊ฒฐ์ œ ์—…๋ฌด ๋“ฑ์— ์ ์šฉ๋˜๋ฉด ์•ˆ๋œ๋‹ค. ๋ฐ์ดํ„ฐ ๋ฒ ์ด์Šค์˜ ์—…๋ฐ์ดํŠธ์™€ ๊ฐ™์ด ์ตœ์‹ ์˜ ๊ฐ’์œผ๋กœ ๋ฎ์–ด์จ๋„ ๋˜๋Š” ๊ฒฝ์šฐ ์‚ฌ์šฉํ•˜๋ฉด ์ข‹๋‹ค. ๋ฐ˜๋ฉด at-most-once์˜ ๊ฒฝ์šฐ ์‹ค์ˆ˜๋กœ ๋‹ค๋ฅธ ์‚ฌ๋žŒ์—๊ฒŒ ๋‘๋ฒˆ ๊ฒฐ์ œ ํ•˜๊ณ ์‹ถ์ง€ ์•Š์„๋•Œ ์‚ฌ์šฉํ•œ๋‹ค. exactly-once์˜ at-most-once์™€ ๋น„์Šทํ•˜๋‹ค. ํ•˜์ง€๋งŒ ๋‹ค๋ฅธ์ ์ด ์žˆ๋‹ค๋ฉด at-most-once๋Š” ACK๋ฅผ ๋ฐ›์•˜์„ ๋•Œ ๋” ์ด์ƒ ์žฌ์ „์†ก ํ•˜์ง€ ์•Š๋Š” ๋ฐฉ์‹์ด๋ผ๋ฉด exactly-once ๋ฐฉ์‹์€ cache๊ฐ€ ์กด์žฌํ•œ๋‹ค๋ฉด ํ•ด๋‹น ๊ฐ’์„ ๋Œ๋ ค์ฃผ์–ด duplicate ํ˜„์ƒ์„ ๋ฐฉ์ง€ํ•œ๋‹ค.

At-least-once

At-most-once

Exactly-once

Using the program shown in Figure 3.35, explain what the output will be at lines X and Y.

What are the benefits and detriments of each of the following? Consider both the systems and the programmersโ€™ levels.

a. Synchronous and asynchronous communication
๋™๊ธฐ์  ์†ก์‹ ์€ ๊ตฌํ˜„์ด ๊ฐ„๋‹จํ•˜๊ณ  ๋ณ„๋‹ค๋ฅธ ์ด๋ฒคํŠธ์— ๋Œ€ํ•œ ํ•ธ๋“ค๋ง์ด ํ•„์š” ์—†๋‹ค. ํ•˜์ง€๋งŒ blocking ๋ฐฉ์‹์œผ๋กœ ํ•œ ๋ฉ”์‹œ์ง€๋ฅผ ์ฃผ๊ณ  ๋ฐ›์„ ๋•Œ ๋‹ค๋ฅธ ๋™์ž‘์„ ํ•˜์ง€ ๋ชปํ•œ๋‹ค๋Š” ๋‹จ์ ์ด ์žˆ๋‹ค. (rendezvous) ๋ฐ˜๋ฉด ๋น„๋™๊ธฐ์  ํ†ต์‹ ์€ ๋‹ค๋ฅธ ์ž‘์—…๋„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ๋ณ„๋„์˜ ์ด๋ฒคํŠธ ํ•ธ๋“ค๋ง์ด ํ•„์š”ํ•˜๋‹ค.

b. Automatic and explicit buffering
์ž๋™ ๋ฒ„ํผ๋ง์€ ํšจ์œจ์ ์ธ ์ž์› ํ™œ์šฉ์ด ๊ฐ€๋Šฅํ•˜๋ฉฐ, ์ „์†ก์ž๊ฐ€ block์„ ํ•  ํ•„์š”๊ฐ€ ์—†๋‹ค. ๋ช…์‹œ์  ๋ฒ„ํผ๋ง์€ ํ๊ฐ€ ๋‹ค ์ฐจ๋ฉด blockํ•ด์•ผํ•œ๋‹ค๋Š” ๋‹จ์ ์ด ์žˆ์œผ๋‚˜ ์ž˜ ์กฐ์ •ํ•œ๋‹ค๋ฉด ์ž๋™ ๋ฒ„ํผ๋ง๋ณด๋‹ค ํšจ์œจ์ ์ผ ์ˆ˜ ์žˆ๋‹ค.

c. Send by copy and send by reference
๋ณต์‚ฌ ์›๋ณธ์ด ๋‚จ๊ธฐ ๋•Œ๋ฌธ์— ์•ˆ์ „์„ฑ์„ ํ™•๋ณดํ•  ์ˆ˜ ์žˆ์œผ๋‚˜ ๋ฉ”๋ชจ๋ฆฌ์˜ ๋‚ญ๋น„๊ฐ€ ์ผ์–ด๋‚œ๋‹ค.
๋ฐ˜๋Œ€๋กœ ์ฐธ์กฐ๋Š” ์˜๋„์น˜ ์•Š์€ ๊ฐ’ ๋ณ€๊ฒฝ์ด๋ผ๋Š” ๋ถ€์ž‘์šฉ์ด ์žˆ์„ ์ˆ˜ ์žˆ๋‹ค.

d. Fixed-sized and variable-sized messages
๊ณ ์ • ํฌ๊ธฐ์˜ ๋ฉ”์‹œ์ง€๋Š” ๊ตฌํ˜„์ด ์‰ฝ๊ณ , ์‹œ์Šคํ…œ ์—ฐ์‚ฐ์˜ ์ตœ์ ํ™”๊ฐ€ ํ•ด๋‹น ์‚ฌ์ด์ฆˆ์— ๋งž์ถ”์–ด ๊ฐ€๋Šฅํ•ด์ง„๋‹ค.
๋ฐ˜๋Œ€๋กœ ๊ฐ€๋ณ€ ํฌ๊ธฐ ๋ฉ”์‹œ์ง€๋Š” ๊ณต๊ฐ„์˜ ํšจ์œจ์  ํ™œ์šฉ๊ณผ ๋ฉ”๋ชจ๋ฆฌ ๋ฒ”์œ„ ๋‚ด์˜ ๋ชจ๋“  ํฌ๊ธฐ ๋ฉ”์‹œ์ง€๋ฅผ ์ „์†กํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ์žฅ์ ์ด ์žˆ๋‹ค. ํ•˜์ง€๋งŒ ์‹œ์Šคํ…œ ์—ฐ์‚ฐ์˜ ์ตœ์ ํ™”๋Š” ๋ถˆ๊ฐ€๋Šฅํ•˜๋‹ค.

Thread

Provide two programming examples in which multithreading does not provide better performance than a single-threaded solution.

์ˆœ์ฐจ์ ์œผ๋กœ ์‹คํ–‰๋˜๋Š” ํ”„๋กœ๊ทธ๋žจ์˜ ๊ฒฝ์šฐ ๋ฉ€ํ‹ฐ์“ฐ๋ ˆ๋”ฉ์€ ์ ํ•ฉํ•˜์ง€ ์•Š๋‹ค.
๋ฐ˜๋ฉด, shell ๋งŒ๋“ค๊ธฐ ๊ณผ์ œ์™€ ๊ฐ™์ด ๋ฐฑ๊ทธ๋ผ์šด๋“œ ์‹คํ–‰์„ ์ง€์›ํ•˜๋ฉฐ ์‹ค์‹œ๊ฐ„์œผ๋กœ ์‚ฌ์šฉ์ž์™€ ์ปค๋ฎค๋‹ˆ์ผ€์ด์…˜์„ ํ•˜๋Š” ํ”„๋กœ๊ทธ๋žจ์ด๋ผ๋ฉด ๋ฉ€ํ‹ฐ ์“ฐ๋ ˆ๋”ฉ์ด ํ•„์š”ํ•˜๋‹ค.

Describe the actions taken by a thread library to context switch between user-level threads.

ํ”„๋กœ์„ธ์Šค์˜ ๋ฌธ๋งฅ ๊ตํ™˜๊ณผ๋„ ์ƒ๋‹นํžˆ ๋น„์Šทํ•˜์ง€๋งŒ ๋” ์ ์€ ์˜ค๋ฒ„ํ—ค๋“œ๋ฅผ ๊ฐ€์ง„๋‹ค. stack ์˜์—ญ์„ ์ œ์™ธํ•œ ๋ชจ๋“  ๋ฉ”๋ชจ๋ฆฌ๋ฅผ ๊ณต์œ ํ•˜๊ธฐ ๋•Œ๋ฌธ์— Context switching ๋ฐœ์ƒ์‹œ Stack ์˜์—ญ๋งŒ ๋ณ€๊ฒฝํ•˜์—ฌ ์ง„ํ–‰ํ•˜๋ฉด ๋œ๋‹ค.

Under what circumstances does a multithreaded solution using multiple kernel threads provide better performance than a single-threaded solution on a single-processor system?

๋‹จ์ผ ์Šค๋ ˆ๋“œ ํ”„๋กœ๊ทธ๋žจ์€ ์š”๊ตฌ๊ฐ€ ๋„์ฐฉํ•  ๋•Œ๋งˆ๋‹ค ์ƒˆ๋กœ์šด ํ”„๋กœ์„ธ์Šค๋ฅผ ์ƒ์„ฑํ•˜๊ณ 
๋ฌธ๋งฅ๊ตํ™˜์— ์˜ค๋ฒ„ํ—ค๋“œ๊ฐ€ ๋ฐœ์ƒํ•˜์ง€๋งŒ, ๋‹ค์ค‘ ์Šค๋ ˆ๋“œ๋ผ๋ฉด ํ•ด๋‹น ์š”๊ตฌ์— ๋Œ€ํ•ด์„œ ์Šค๋ ˆ๋“œ๋กœ ์ฒ˜๋ฆฌ
ํ•˜๋ฏ€๋กœ ๋ฌธ๋งฅ๊ตํ™˜ ๋น„์šฉ์ด ๊ฐ์†Œํ•œ๋‹ค.

Which of the following components of program state are shared across threads in a multithreaded process?

a. Register values
b. Heap memory
c. Global variables
d. Stack memory

b, c (heap memory and global variables are shared; register values and the stack are per-thread).

Can a multithreaded solution using multiple user-level threads achieve better performance on a multiprocessor system than on a single-processor system?

์‚ฌ์šฉ์ž ์ˆ˜์ค€ ์Šค๋ ˆ๋“œ๋Š” OS์—์„œ ๋‹จ์ผ ํ”„๋กœ์„ธ์Šค๋กœ๋งŒ ํ•ด์„๋˜๋ฉฐ ๋ณ„๋„์˜ ํ”„๋กœ์„ธ์„œ์—์„œ ํ”„๋กœ์„ธ์Šค์˜ ๋‹ค๋ฅธ ์Šค๋ ˆ๋“œ๋ฅผ ์Šค์ผ€์ฅด๋งํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. ์ด๋Š”์ด ์ƒํ™ฉ์—์„œ ์„ฑ๋Šฅ์ƒ์˜ ์ด์ ์ด ์—†์Œ์„ ๊ฐ•๋ ฅํžˆ ์‹œ์‚ฌํ•ฉ๋‹ˆ๋‹ค.

As described in Section 4.5.2, Linux does not distinguish between processes and threads. Instead, Linux treats both in the same way, allowing a task to be more akin to a process or a thread depending on the set of flags passed to the clone() system call. However, many operating systemsโ€”such as Windows XP and Solarisโ€”treat processes and threads differently. Typically, such systems use a notation wherein the data structure for a process contains pointers to the separate threads belonging to the process. Contrast these two approaches for modeling processes and threads within the kernel.

์–ด๋–ค ์Šค๋ ˆ๋“œ๊ฐ€ ์–ด๋–ค ํ”„๋กœ์„ธ์Šค์— ํ•ด๋‹นํ•˜๋Š”์ง€ ์‹๋ณ„ํ•˜๊ณ  Accounting์„ ํ•  ๋•Œ ์ถ”๊ฐ€์ ์ธ ๋ณต์žก์„ฑ์ด ์š”๊ตฌ๋œ๋‹ค. ํ•˜์ง€๋งŒ ์Šค์ผ€์ฅด๋ง์‹œ ๊ตฌ๋ณ„ํ•˜์ง€ ์•Š๊ธฐ ๋•Œ๋ฌธ์— ๋” ๋‹จ์ˆœํ•˜๊ฒŒ ๊ตฌํ˜„ ํ•  ์ˆ˜ ์žˆ๋‹ค.
fork() ๋ช…๋ น์–ด๋ฅผ ์ˆ˜ํ–‰ํ•˜๋ฉด ๋ถ€๋ชจ ํ”„๋กœ์„ธ์Šค์˜ ์ž๋ฃŒ๊ตฌ์กฐ๋ฅผ ๋ณต์‚ฌํ•˜์ง€๋งŒ clone() ๋ช…๋ น์–ด๋ฅผ ์ˆ˜ํ–‰ํ•˜๋ฉด ์ „๋‹ฌ๋œ ํ”Œ๋ž˜๊ทธ๊ฐ€ ๋‹ค๋ฅด๋ฉฐ, ๋ถ€๋ชจ ํ”„๋กœ์„ธ์Šค์˜ ์ž๋ฃŒ๊ตฌ์กฐ๋ฅผ ํฌ์ธํŒ… ํ•œ๋‹ค.

In Chapter 3, we discussed Googleโ€™s Chrome browser and its practice of opening each new website in a separate process. Would the same benefits have been achieved if instead Chrome had been designed to open each new website in a separate thread? Explain.

ํ•œ ์Šค๋ ˆ๋“œ์—์„œ ์˜ค๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด ํ”„๋กœ์„ธ์Šค ์ž์ฒด์˜ ์˜ค๋ฅ˜๋กœ ๋ฒˆ์ง€๊ธฐ ๋•Œ๋ฌธ์— ๋‹ค๋ฅธ ์Šค๋ ˆ๋“œ๋“ค๋„ ์ „๋ถ€ ์ข…๋ฃŒ๋˜๋Š” ๋ถ€์ž‘์šฉ์ด ์ƒ๊ธด๋‹ค. ๋”ฐ๋ผ์„œ ๊ฐœ๋ณ„ ํ”„๋กœ์„ธ์Šค๋กœ ์ž‘์šฉํ•˜๋Š”๊ฒŒ ๋งž๋‹ค.

Is it possible to have concurrency but not parallelism? Explain. (concurrency vs. parallelism)

์‚ฌ์šฉ์ž์˜ ์ธ์ง€๋ณด๋‹ค ๋น ๋ฅด๊ฒŒ ๋ฌธ๋งฅ๊ตํ™˜์„ ํ•œ๋‹ค๋ฉด concurrencyํ•˜๊ฒŒ ๋Š๋‚„ ์ˆ˜ ์žˆ๋‹ค.

Using Amdahlโ€™s Law, calculate the speedup gain of an application that has a 60 percent parallel component for (a) two processing cores and (b) four processing cores.

์•”๋‹ฌ์˜ ๋ฒ•์น™
S๋Š” ์ˆœ์ฐจ์ˆ˜ํ–‰ ๋ถ€๋ถ„์ด๊ธฐ ๋•Œ๋ฌธ์— 40percent์ธ 0.4์ด๊ณ , N์€ a์—์„œ 2, b์—์„œ 4์ด๋‹ค.
๊ฐ๊ฐ์˜ ๊ณ„์‚ฐ ๊ฒฐ๊ณผ๊ฐ’์€ (a) <= 1 / 0.4+0.3 = ์•ฝ 1.42 , (b) <= 1 / 0.4+0.15 = ์•ฝ 1.81

Determine if the following problems exhibit task or data parallelism.

๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌ์„ฑ
๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌ์„ฑ์€ ์ „์ฒด ๋ฐ์ดํ„ฐ๋ฅผ ์„œ๋ธŒ ๋ฐ์ดํ„ฐ๋“ค๋กœ ๋‚˜๋ˆˆ ํ›„
์„œ๋ธŒ ๋ฐ์ดํ„ฐ๋“ค์„ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌํ•˜์—ฌ ์ž‘์—…์„ ๋น ๋ฅด๊ฒŒ ์ˆ˜ํ–‰ํ•˜๋Š” ๊ฒƒ์„ ๋งํ•œ๋‹ค.

ํƒœ์Šคํฌ ๋ณ‘๋ ฌ์„ฑ
ํƒœ์Šคํฌ ๋ณ‘๋ ฌ์„ฑ์€ ์„œ๋กœ ๋‹ค๋ฅธ ์ž‘์—…์„ ๋ณ‘๋ ฌ ์ฒ˜๋ฆฌํ•˜๋Š” ๊ฒƒ์„ ๋งํ•œ๋‹ค.
ex) ์›น ์„œ๋ฒ„๋Š” ๊ฐ๊ฐ์˜ ํด๋ผ์ด์–ธํŠธ์—์„œ ์š”์ฒญํ•œ ๋‚ด์šฉ์„ ์Šค๋ ˆ๋“œ๋กœ ๋ณ‘๋ ฌ๋กœ ์ฒ˜๋ฆฌํ•œ๋‹ค.

โ€ข The multithreaded statistical program described in Exercise 4.21
์„ธ๊ฐœ์˜ ์Šค๋ ˆ๋“œ๋กœ ๊ฐ๊ฐ์˜ ๊ฐ’์„ ๊ตฌํ•˜๋Š” ํƒœ์Šคํฌ ๋ณ‘๋ ฌ์„ค
โ€ข The multithreaded Sudoku validator described in Project 1 in this
chapter 2๊ฐœ์˜ ์Šค๋ ˆ๋“œ๋กœ ํ–‰๋ ฌ์„ ๊ฒ€์‚ฌ(ํƒœ์Šคํฌ ๋ณ‘๋ ฌ์„ฑ)ํ•˜๊ณ , 9๊ฐœ์˜ ์Šค๋ ˆ๋“œ๋กœ 3x3 9๊ฐœ์˜ ๋ถ€๋ถ„์„ ๊ฐ๊ฐ ๊ฒ€์‚ฌ(๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌ์„ฑ)ํ•œ๋‹ค.
โ€ข The multithreaded sorting program described in Project 2 in this

chapter ์ „์ฒด ๋ฐ์ดํ„ฐ๋ฅผ 2๊ฐœ๋กœ devide(๋ฐ์ดํ„ฐ ๋ณ‘๋ ฌ์„ฑ)ํ•˜์—ฌ ์ •๋ ฌํ•˜๊ณ  ์Šค๋ ˆ๋“œ๋ฅผ mergeํ•œ๋‹ค.

โ€ข The multithreaded web server described in Section 4.1
์„œ๋ฒ„๋Š” ํด๋ผ์ด์–ธํŠธ๋กœ๋ถ€ํ„ฐ ์š”์ฒญ์ด ๋“ค์–ด์˜ค๋ฉด ๊ฐœ๋ณ„ ์Šค๋ ˆ๋“œ๋กœ ์ฒ˜๋ฆฌํ•œ๋‹ค (ํƒœ์Šคํฌ ๋ณ‘๋ ฌ์„ฑ)

A system with two dual-core processors has four processors available for scheduling. A CPU-intensive application is running on this system. All input is performed at program start-up, when a single file must be opened. Similarly, all output is performed just before the program terminates, when the program results must be written to a single file. Between startup and termination, the program is entirely CPU-bound. Your task is to improve the performance of this application by multithreading it. The application runs on a system that uses the one-to-one threading model (each user thread maps to a kernel thread).

๋‘ ๊ฐœ์˜ ์ด์ค‘ ์ฝ”์–ด ์ฒ˜๋ฆฌ๊ธฐ๋Š” ์Šค์ผ€์ค„ ๊ฐ€๋Šฅํ•œ 4๊ฐœ์˜ ์ฒ˜๋ฆฌ๊ธฐ๋ฅผ ๊ฐ€์ง€๊ณ  ์žˆ๋‹ค. CPU-bound application์ด ์‹œ์Šคํ…œ์—์„œ ์‹คํ–‰์ค‘์ด๋ผ๊ณ  ํ•˜์ž. ๋ชจ๋“  ์ž…๋ ฅ์€ ํ”„๋กœ๊ทธ๋žจ์ด ์‹œ์ž‘ํ•  ๋•Œ ์ฃผ์–ด์ง€๊ณ  ๋ฐ˜๋“œ์‹œ ํ•˜๋‚˜์˜ ํŒŒ์ผ์ด ์—ด๋ ค์•ผ ํ•œ๋‹ค. ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ชจ๋“  ์ถœ๋ ฅ์€ ํ”„๋กœ๊ทธ๋žจ์ด ์ข…๋ฃŒ๋˜๊ธฐ ์ „์— ์ˆ˜ํ–‰๋œ๋‹ค. ํ”„๋กœ๊ทธ๋žจ์˜ ๊ฒฐ๊ณผ๋Š” ํ•˜๋‚˜์˜ ํŒŒ์ผ์— ๋ชจ๋‘ ๊ธฐ๋ก๋˜์–ด์•ผํ•œ๋‹ค. ํ”„๋กœ๊ทธ๋žจ์ด ์‹œ์ž‘ํ•ด์„œ ์ข…๋ฃŒํ•  ๋•Œ๊นŒ์ง€ ํ”„๋กœ๊ทธ๋žจ์€ CPU ์ง‘์ค‘์ ์ด๋‹ค. ์—ฌ๋Ÿฌ๋ถ„์€ ์ด ํ”„๋กœ๊ทธ๋žจ์„ ๋‹ค์ค‘ ์Šค๋ ˆ๋“œํ™”ํ•˜์—ฌ ์„ฑ๋Šฅ์„ ํ–ฅ์ƒ์‹œ์ผœ์•ผํ•œ๋‹ค. application์€ ์ผ๋Œ€์ผ ์Šค๋ ˆ๋“œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜๋ฉฐ ๊ฐ ์‚ฌ์šฉ์ž ์ˆ˜์ค€ ์Šค๋ ˆ๋“œ๋Š” ์ปค๋„์Šค๋ ˆ๋“œ์— ์‚ฌ์ƒ๋œ๋‹ค.

โ€ข How many threads will you create to perform the input and output?
Explain. ์ธํ’‹ 1๊ฐœ ์Šค๋ ˆ๋“œ, ์•„์›ƒํ’‹ 1๊ฐœ ์Šค๋ ˆ๋“œ
โ€ข How many threads will you create for the CPU-intensive portion of
the application? Explain. 4๊ฐœ ์Šค๋ ˆ๋“œ ๋ชจ๋‘ ํ• ๋‹น.

Consider the following code segment


a. How many unique processes are created?
Six processes are created.
b. How many unique threads are created?
Eight threads are created (four single-threaded processes plus two processes with two threads each).

โ€ข The initial call to fork() (labeled โ€œ1โ€ above) creates a copy of the original process (2 processes at this point)
โ€ข The second call to fork() (labeled โ€œ2โ€ above) is executed only by the child process from the first fork() call. (3 processes at this point)
โ€ข There are now two processes executing the code inside the if statement, meaning both of those processes call thread_create() (3 processes, 2 threads)
o What should have been made clear in the original assignment is that each newly created thread starts a different function than the one currently executingโ€”one of the arguments to a thread_create() function is a pointer to the function to begin executing. The threads therefore donโ€™t move on to the last fork() call.
โ€ข All three processes execute the final call to fork() (labeled โ€œ4โ€ above), so each process copies itself at that point (6 processes, 2 threads)
However, Peter brought to my attention the fact that the above solution does not account for the fact that each process starts as a single thread of execution. A better solution would be to say that this code segment creates a total of six processes and eight threads (two threads created by calls
to thread_create() and six threads corresponding to the six single-threaded processes).
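A hedged, runnable reconstruction of the shape of the code segment being analyzed (the original figure is not shown above); pthread_create() stands in for the exercise's generic thread_create(), and the labels 1, 2, 3, 4 mark the calls referenced in the explanation.

```c
#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* the new thread starts here and never reaches the final fork() */
static void *worker(void *arg) { (void)arg; return NULL; }

int main(void) {
    pthread_t tid;
    pid_t pid = fork();                               /* 1: 2 processes      */
    if (pid == 0) {                                   /* first child only    */
        fork();                                       /* 2: 3 processes      */
        pthread_create(&tid, NULL, worker, NULL);     /* 3: +2 threads total */
    }
    fork();                                           /* 4: 6 processes      */
    while (wait(NULL) > 0)                            /* reap any children   */
        ;
    return 0;
}
```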

The program shown in Figure 4.16 uses the Pthreads API. What would be the output from the program at LINE C and LINE P?


attr: specifies the attributes of the thread to be created.
pthread_create: creates the thread; the third argument is the function the new thread starts executing.
pthread_join: waits for the thread to terminate.

LINE C (child): value = 5, LINE P (parent): value = 0. fork() gives the child its own copy of value; the thread created inside the child shares the child's address space and sets value to 5, so the child prints 5, while the parent's copy is never touched and it prints 0.
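A hedged sketch of the shape of Figure 4.16: the child process creates a thread that modifies the global value, while the parent's copy stays untouched.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int value = 0;                       /* global; copied by fork() */

static void *runner(void *param) {
    (void)param;
    value = 5;                       /* modifies the child's copy only */
    pthread_exit(0);
}

int main(void) {
    pid_t pid = fork();
    pthread_t tid;
    pthread_attr_t attr;

    if (pid == 0) {                                  /* child  */
        pthread_attr_init(&attr);
        pthread_create(&tid, &attr, runner, NULL);
        pthread_join(tid, NULL);
        printf("CHILD: value = %d\n", value);        /* LINE C: 5 */
    } else if (pid > 0) {                            /* parent */
        wait(NULL);
        printf("PARENT: value = %d\n", value);       /* LINE P: 0 */
    }
    return 0;
}
```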

Consider a multicore system and a multithreaded program written using the many-to-many threading model. Let the number of user-level threads in the program be greater than the number of processing cores in the system. Discuss the performance implications of the following scenarios.

a. The number of kernel threads allocated to the program is less than the number of processing cores.
Each kernel thread is mapped to a core, but because there are fewer kernel threads than cores, some processing cores sit idle.

b. The number of kernel threads allocated to the program is equal to the number of processing cores.
Every kernel thread can run on its own core. However, if a kernel thread blocks inside the kernel (because of a page fault, a system call, and so on), the corresponding core sits idle for that time.

c. The number of kernel threads allocated to the program is greater than the number of processing cores but less than the number of user-level threads.
Contention for service now arises at two levels, so both PCS and SCS scheduling occur.

• PCS (process contention scope): contention among the threads of the same process for the available LWPs/kernel threads.
• SCS (system contention scope): contention among kernel threads for the CPUs.

Pthreads provides an API for managing thread cancellation. The pthread_setcancelstate() function is used to set the cancellation state. Its prototype appears as follows: the two possible values for the state are PTHREAD_CANCEL_ENABLE and PTHREAD_CANCEL_DISABLE. Using the code segment shown in Figure 4.17, provide examples of two operations that would be suitable to perform between the calls to disable and enable thread cancellation.
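One possible answer sketch (an assumption, not the textbook's official solution): operations that must not be left half-done, such as updating a shared structure while holding a mutex or writing a record that has to stay consistent, are good candidates to place between the disable and enable calls.

```c
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int shared_counter = 0;

void update_shared(void) {
    int oldstate;

    pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate);

    /* 1. update a shared data structure while holding a lock; being
     *    cancelled here would leave the mutex locked forever          */
    pthread_mutex_lock(&lock);
    shared_counter++;
    pthread_mutex_unlock(&lock);

    /* 2. similarly: acquiring/releasing a resource, writing a log
     *    record, or any other step that must not be left half-done    */

    pthread_setcancelstate(PTHREAD_CANCEL_ENABLE, &oldstate);
}

/* small driver so the sketch is runnable */
static void *worker(void *arg) { (void)arg; update_shared(); return NULL; }

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    return shared_counter == 1 ? 0 : 1;
}
```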

CPU Scheduling

Why is it important for the scheduler to distinguish I/O-bound programs from CPU-bound programs?

I/O ๋ฐ”์šด๋“œ ํ”„๋กœ๊ทธ๋žจ์€ CPU์˜ ์ ์œ ์œจ์ด ๋‚ฎ๊ณ  ์ž์ฃผ I/O ์ฒ˜๋ฆฌ๋ฅผ ํ•˜๋Ÿฌ ๊ฐ€๊ธฐ ๋•Œ๋ฌธ์— CPU Burst time์ด ์งง๊ณ , CPU ๋ฐ”์šด๋“œ ํ”„๋กœ๊ทธ๋žจ์€ Burst time์ด ๊ธธ๋‹ค. ๋”ฐ๋ผ์„œ CPU ๋ฐ”์šด๋“œ ํ”„๋กœ๊ทธ๋žจ์ด CPU๋ฅผ ์˜ค๋žซ๋™์•ˆ ์ ์œ ํ•˜๋ฉด ๋‹ค๋ฅธ ํ”„๋กœ์„ธ์Šค๋“ค์ด ์˜ค๋žซ๋™์•ˆ ๊ธฐ๋‹ค๋ฆฌ๋Š” Convoy effect(ํ˜ธ์œ„ ํšจ๊ณผ)๊ฐ€ ๋ฐœ์ƒํ•œ๋‹ค.

Discuss how the following pairs of scheduling criteria conflict in certain settings.

a. CPU utilization and response time
CPU ์‚ฌ์šฉ๋Ÿ‰์„ ๋Š˜๋ฆฌ๋ ค๋ฉด Context switch๋ฅผ ์ตœ์†Œํ™”ํ•˜๋ฉด ๋˜์ง€๋งŒ, ๊ทธ๋Ÿด ๊ฒฝ์šฐ ํ•œ ํ”„๋กœ์„ธ์Šค๊ฐ€ ์˜ค๋ž˜ CPU๋ฅผ ์ ์œ ํ•˜๋Š” ํ˜ธ์œ„ ํšจ๊ณผ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๊ธฐ๋•Œ๋ฌธ์— response time์€ ์ปค์ง€๊ฒŒ(๋Š๋ ค์ง€๊ฒŒ) ๋œ๋‹ค.

b. Average turnaround time and maximum waiting time
์งง์€ ์ž‘์—…๋ถ€ํ„ฐ ์Šค์ผ€์ฅด๋ง ํ•˜๊ณ  ์„œ๋น„์Šค ํ•œ๋‹ค๋ฉด ํ‰๊ท  turnaround time์€ ์ค„์–ด๋“ค์ง€๋งŒ, ๋ฌดํ•œ์ • waitingํ•˜๋Š” ๊ธฐ์•„์ƒํƒœ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋‹ค.

c. I/O device utilization and CPU utilization
CPU์˜ ์‚ฌ์šฉ๋Ÿ‰์„ ์ตœ๋Œ€ํ™”ํ•˜๋ ค๋ฉด CPU ๋ฐ”์šด๋“œ ํƒœ์Šคํฌ๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š”๊ฒŒ ๋งž๊ณ , I/O์žฅ์น˜์˜ ๊ฒฝ์šฐ I/O ๋ฐ”์šด๋“œ ํƒœ์Šคํฌ๋ฅผ ์ˆ˜ํ–‰ํ•  ๋•Œ ์‚ฌ์šฉ๋Ÿ‰์ด ์ตœ๋Œ€๊ฐ€ ๋˜๊ณ  ๊ณผ์ •์—์„œ ๋ฌธ๋งฅ๊ตํ™˜์ด ๋งŽ์ด ์ผ์–ด๋‚˜๊ฒŒ ๋œ๋‹ค.

Consider the exponential average formula used to predict the length of the next CPU burst. What are the implications of assigning the following values to the parameters used by the algorithm?


a. alpha๊ฐ€ 0 ์ด๋ผ๋Š” ์†Œ๋ฆฌ๋Š” ํ˜„์žฌ์˜ ๊ฐ’์— ๋Œ€ํ•ด์„œ ๋ฐ˜์˜์„ ํ•˜์ง€ ์•Š๊ฒ ๋‹ค๋Š” ๋œป์ด๊ธฐ ๋•Œ๋ฌธ์— burst time์€ ํ•ญ์ƒ 100์ด ๋‚˜์˜ค๊ฒŒ ๋œ๋‹ค.
b. alpha๊ฐ€ 0.99๋ผ๋Š” ๊ฒƒ์€ ํ˜„์žฌ์˜ ๊ฐ’์— ๋Œ€ํ•œ ๋ฐ˜์˜๋ฅ ์„ 99ํผ์„ผํŠธ๋กœ ํ•˜๊ณ  ๊ณผ๊ฑฐ์˜ ๋ฐ์ดํ„ฐ๋“ค์€ 1ํผ์„ผํŠธ๋งŒ ๋ฐ˜์˜ํ•œ๋‹ค๋Š” ์ด์•ผ๊ธฐ์ด๋‹ค. ์ด์ „์˜ burst ๊ฐ’์„ ๊ฑฐ์˜ ๋™์ผํ•˜๊ฒŒ ์‚ฌ์šฉํ•œ๋‹ค๋Š” ๋œป์ด ๋œ๋‹ค.

Consider the following set of processes, with the length of the CPU-burst time given in milliseconds. The processes are assumed to have arrived in the order P1, P2, P3, P4, P5, all at time 0.

a. Draw four Gantt charts illustrating the execution of these processes using FCFS, SJF, a nonpreemptive priority (a smaller priority number implies a higher priority), and RR (quantum = 1) scheduling.
b. What is the turnaround time of each process for each of the scheduling algorithms in part a?
c. What is the waiting time of each process for each of the scheduling algorithms in part a?
d. Which of the schedules in part a results in the minimal average waiting time (over all processes)?

Which of the following scheduling algorithms could result in starvation?

a. First-come, first-served
b. Shortest job first
c. Round robin
d. Priority
b, d (shortest-job-first and priority scheduling can leave long or low-priority processes waiting indefinitely).

Consider a variant of the RR scheduling algorithm where the entries in the ready queue are pointers to the PCBs.

a. What would be the effect of putting two pointers to the same process in the ready queue?
b. What would be the major advantages and disadvantages of this scheme?
c. How would you modify the basic RR algorithm to achieve the same effect without the duplicate pointers?

a. ํ”„๋กœ์„ธ์Šค๋Š” ๋” ์ž์ฃผ ์‹œ๊ฐ„์„ ํ• ๋‹น๋ฐ›๊ฒŒ ๋œ๋‹ค. ๋”ฐ๋ผ์„œ ์šฐ์„ ์ˆœ์œ„๊ฐ€ ๋†’์•„์ง„๋‹ค.
b : ์ค‘์š”ํ•œ ํ”„๋กœ์„ธ์Šค ์šฐ์„ ์ˆœ์œ„๋ฅผ ๋†’์ผ ์ˆ˜ ์žˆ๋‹ค. ๋œ ์ค‘์š”ํ•œ ์ž‘์—…์€ ๋ฐ€๋ ค๋‚  ์ˆ˜ ์žˆ๋‹ค.
c : ๋†’์€ ์šฐ์„ ์ˆœ์œ„๋ฅผ ๊ฐ€์ง„ ํ”„๋กœ์„ธ์Šค์— ๊ธด ์‹œ๊ฐ„์„ ํ• ๋‹นํ•œ๋‹ค. RR ์Šค์ผ€์ค„์—์„œ ๋” ๋งŽ์€ ํ€€ํ…€์„ ์ค€๋‹ค.

Consider a system running ten I/O-bound tasks and one CPU-bound task. Assume that the I/O-bound tasks issue an I/O operation once for every millisecond of CPU computing and that each I/O operation takes 10 milliseconds to complete. Also assume that the context switching overhead is 0.1 millisecond and that all processes are long-running tasks.

What is the CPU utilization for a round-robin scheduler when:
a. The time quantum is 1 millisecond
b. The time quantum is 10 milliseconds

a. time quantum์ด 1ms ์ด๋ฏ€๋กœ, 1ms + 0.1ms = 1.1ms๊ฐ€ ๋˜๊ณ , ์‚ฌ์šฉ์‹œ๊ฐ„์€ 1ms๊ฐ€ ๋œ๋‹ค. ๋”ฐ๋ผ์„œ CPU ์‚ฌ์šฉ๋ฅ ์€ 1/1.1 100 = 91(%)์ด๋‹ค.
b. time quantum์ด 10ms ์ด๋‹ค. 10๊ฐœ์˜ 1.1ms์™€ 1๊ฐœ์˜ 10.1ms์˜ ์ž‘์—…์ด ์ด ๊ฑธ๋ฆฐ ์‹œ๊ฐ„์ด๊ณ , ์‹ค์ œ ์‚ฌ์šฉ ์‹œ๊ฐ„์€ 11
0.1ms์„ ๋บ€ 20ms์ด๋‹ค. ๋”ฐ๋ผ์„œ 20/21.1 *100 = 94(%) ์ด๋‹ค.

Consider a system implementing multilevel queue scheduling. What strategy can a computer user employ to maximize the amount of CPU time allocated to the userโ€™s process?

๋‹ต : OS ๊ด€์ ์—์„œ๋Š” User job ํ์˜ ์‹œ๊ฐ„ ํ• ๋‹น์„ ๋‹ค๋ฅธ ํ๋“ค๋ณด๋‹ค ๋Š˜๋ ค์ค€๋‹ค.

๋” ๋‚˜์•„๊ฐ€๊ธฐ : ์Šค์ผ€์ฅด๋ง ์•Œ๊ณ ๋ฆฌ์ฆ˜์ด ํƒ€์ž„์Šฌ๋ผ์ด์Šค๋ฅผ ์ฑ„์šฐ์ง€ ์•Š๊ณ  ๋๋‚œ ํ”„๋กœ์„ธ์Šค์— ๋Œ€ํ•ด์„œ ์šฐ์„ ์ˆœ์œ„๋ฅผ ๋†’์—ฌ์ฃผ๋Š” ๋ฐฉ์‹์œผ๋กœ ์„ค๊ณ„๋˜์—ˆ๋‹ค๋ฉด, ์ฆ‰ ๋ฉ€ํ‹ฐ๋ ˆ๋ฒจํ ์ค‘ ๋ฉ€ํ‹ฐ ๋ ˆ๋ฒจ ํ”ผ๋“œ๋ฐฑ ํ๋กœ์„œ ์„ค๊ณ„ ๋˜์—ˆ๋‹ค๋ฉด, ํ”„๋กœ๊ทธ๋žจ์€ ์‹œ๊ฐ„ ํ€€ํ…€์„ ์™„์ „ํžˆ ํ™œ์šฉํ•˜์ง€ ์•Š์Œ์œผ๋กœ์จ ํ• ๋‹น๋˜๋Š” CPU ์‹œ๊ฐ„์„ ์ตœ๋Œ€ํ™”ํ•  ์ˆ˜ ์žˆ๋‹ค. ํ• ๋‹น๋œ ํ€€ํ…€์˜ ๋งŽ์€ ๋ถ€๋ถ„์„ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ํ€€ํ…€์ด ๋๋‚˜๊ธฐ ์ „์— CPU๋ฅผ ํฌ๊ธฐํ•˜์—ฌ ํ”„๋กœ์„ธ์Šค์™€ ๊ด€๋ จ๋œ ์šฐ์„  ์ˆœ์œ„๋ฅผ ์ฆ๊ฐ€์‹œํ‚จ๋‹ค.

๋‹ค์ค‘ ์ˆ˜์ค€ ํ”ผ๋“œ๋ฐฑ ๋Œ€๊ธฐ์—ด์€ ํŠน๋ณ„ํ•œ CPU ์˜ˆ์•ฝ์„ ๊ตฌํ˜„ํ•ฉ๋‹ˆ๋‹ค. ํ”„๋กœ์„ธ์Šค๊ฐ€ ํ• ๋‹น๋œ ์ „์ฒด ํ€€ํ…€์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ๊ฒฝ์šฐ OS๋Š” ์ด ํ”„๋กœ์„ธ์Šค๊ฐ€ ์ผ๋ถ€ I/O ๋””๋ฐ”์ด์Šค๋ฅผ ๊ธฐ๋‹ค๋ฆฌ๊ณ  ์žˆ๋‹ค๊ณ  ๊ฐ€์ •ํ•ฉ๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ๋‚˜์ค‘์— CPU๋ฅผ ์ตœ๋Œ€ํ•œ ๋นจ๋ฆฌ ๊ฐ€์ ธ์˜ฌ ์ˆ˜ ์žˆ๋„๋ก ์šฐ์„  ์ˆœ์œ„๊ฐ€ ๋†’์€ ๋Œ€๊ธฐ์—ด์— ๋ณด๊ด€ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค. ๋ฐ˜๋ฉด์—, ํ”„๋กœ์„ธ์Šค๊ฐ€ ํ• ๋‹น๋œ ์ „์ฒด ์‹œ๊ฐ„ ํ€€ํ…€์„ ์‚ฌ์šฉํ•˜๋Š” ๊ฒฝ์šฐ, OS๋Š” ์ด ํ”„๋กœ์„ธ์Šค๊ฐ€ ์‹œ๊ฐ„์ด ๋งŽ์ด ๊ฑธ๋ฆฌ๋Š” ํ”„๋กœ์„ธ์Šค๋ผ๊ณ  ์ƒ๊ฐํ•˜์—ฌ ์šฐ์„  ์ˆœ์œ„๊ฐ€ ๋‚ฎ์€ ๋Œ€๊ธฐ์—ด๋กœ ์ด๋™ํ•ฉ๋‹ˆ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ์‹œ์Šคํ…œ์€ ํ•ญ์ƒ ์ƒˆ๋กœ์šด ํ”„๋กœ์„ธ์Šค๋ฅผ ์„ ํ˜ธํ•˜๊ณ  ๋จผ์ € ์‹คํ–‰ํ•ฉ๋‹ˆ๋‹ค.
๋”ฐ๋ผ์„œ CPU ์‹œ๊ฐ„์„ ์ตœ๋Œ€ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ์ž ํ”„๋กœ์„ธ์Šค๋Š” ๋†’์€ ์šฐ์„  ์ˆœ์œ„ ๋Œ€๊ธฐ์—ด์— ๋จธ๋ฌด๋ฅด๊ณ  1ํšŒ ์–‘์ž ๋‚ด์— ์™„๋ฃŒ๋˜์ง€ ์•Š์„ ๋•Œ ๋‚ฎ์€ ์šฐ์„  ์ˆœ์œ„ ๋Œ€๊ธฐ์—ด์— ํ‘ธ์‹œ๋˜์ง€ ์•Š๊ธฐ๋ฅผ ์›ํ•ฉ๋‹ˆ๋‹ค. I/O ๊ฒฝ๊ณ„ ํ”„๋กœ์„ธ์Šค๋กœ ๊ฐ€์žฅํ•˜๋Š” ๊ฒƒ์ด ์ „๋žต์ž…๋‹ˆ๋‹ค. ์ฒซ ๋ฒˆ์งธ ๋‹จ๋ฝ์—์„œ ์„ค๋ช…ํ•œ ๋ฉ”์ปค๋‹ˆ์ฆ˜์— ๋”ฐ๋ฅด๋ฉด, ํ”„๋กœ์„ธ์Šค๊ฐ€ ์‹œ๊ฐ„ ํ€€ํ…€์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š๊ณ  CPU์— ๋‚จ์•„ ์žˆ์œผ๋ฉด OS๋Š” I/O ๊ฒฝ๊ณ„ ํ”„๋กœ์„ธ์Šค๋กœ ๊ฐ„์ฃผํ•˜์—ฌ ๋†’์€ ์šฐ์„  ์ˆœ์œ„ ๋Œ€๊ธฐ์—ด์— ๋ณด๊ด€ํ•ฉ๋‹ˆ๋‹ค.

Consider a preemptive priority scheduling algorithm based on dynamically changing priorities. Larger priority numbers imply higher priority. When a process is waiting for the CPU (in the ready queue, but not running), its priority changes at a rate a; when it is running, its priority changes at a rate b. All processes are given a priority of 0 when they enter the ready queue. The parameters a and b can be set to give many different scheduling algorithms.

์ •๋ฆฌํ•˜์ž๋ฉด running ์ƒํƒœ์—์„œ๋Š” B์˜ rate๋กœ ๋ณ€ํ•˜๋Š” ์šฐ์„ ์ˆœ์œ„, ready ์ƒํƒœ์—์„œ๋Š” A์˜ rate๋กœ ๋ณ€ํ•œ๋‹ค.
B>A>0์ธ ๊ฒฝ์šฐ : running ์ƒํƒœ์— ์žˆ๋Š” ํ”„๋กœ์„ธ์Šค์˜ ์šฐ์„ ์ˆœ์œ„๊ฐ€ ๊ฐ€์žฅ ๋†’์•„์ง„๋‹ค๋Š” ๋ง์€ ์ฆ‰ ๋จผ์ € ๋“ค์–ด์˜จ๊ฒŒ ๋จผ์ € ๋‚˜๊ฐ„๋‹ค๋Š” ์˜๋ฏธ์ด๋‹ค. FIFO(FCFS)
A<B<0์ธ ๊ฒฝ์šฐ : ๋งˆ์ง€๋ง‰์œผ๋กœ ready ํ๋กœ ์ง„์ž…ํ•œ ํ”„๋กœ์„ธ์Šค์˜ ์šฐ์„ ์ˆœ์œ„๊ฐ€ ๊ฐ€์žฅ ๋†’๋‹ค. LIFO

Explain the differences in the degree to which the following scheduling algorithms discriminate in favor of short processes:

(*to discriminate in favor of something = to favor it)

a. FCFS
b. RR
c. Multilevel feedback queues

a. ready queue์— ๋“ค์–ด์˜จ ์ˆœ์„œ๋Œ€๋กœ ์„œ๋น„์Šค๋ฅผ ํ•˜๋Š” ๋ฐฉ์‹์ด๋‹ค. ์งง์€ ์ž‘์—…์„ ์šฐ๋Œ€ํ•˜๊ฑฐ๋‚˜ ๋จผ์ € ์ฒ˜๋ฆฌํ•˜์ง€ ์•Š๋Š”๋‹ค.
b. time slice ๋‹จ์œ„๋กœ ํ”„๋กœ์„ธ์Šค๋“ค์—๊ฒŒ ๊ณต์ •ํ•˜๊ฒŒ CPU์ž์›์„ ํ• ๋‹นํ•œ๋‹ค. ์งง์€ ์ž‘์—…์„ ์šฐ๋Œ€ํ•˜๊ฑฐ๋‚˜ ๋จผ์ € ์ฒ˜๋ฆฌํ•˜์ง€ ์•Š์ง€๋งŒ, ํƒ€์ž„ ์Šฌ๋ผ์ด์Šค ์•ˆ์—์„œ ๋™๋“ฑํ•˜๊ฒŒ ์ฒ˜๋ฆฌ๋˜๊ธฐ ๋•Œ๋ฌธ์— ์งง์€ ์ž‘์—…์ด FCFS๋ณด๋‹ค ์ƒ๋Œ€์ ์œผ๋กœ ๋น ๋ฅด๊ฒŒ ๋๋‚ธ๋‹ค.
c. ์—ฌ๋Ÿฌ ๋‹จ๊ณ„์˜ ํ๋ฅผ ๋‘๊ณ , ์ฒซ๋ฒˆ์งธ ํ์—์„œ ๋๋‚˜์ง€ ์•Š๋Š” ์ž‘์—…์€ ์‹œ๊ฐ„์ด ์˜ค๋ž˜ ๊ฑธ๋ฆด ๊ฒƒ์ด๋ผ๊ณ  ๊ฐ€์ •ํ•˜๋ฉฐ ๋‹ค์Œ ํ๋กœ ์ด๋™์‹œํ‚จ๋‹ค. ์งง์€ ์ž‘์—…์€ ์ตœ์ดˆ์˜ ํ์—์„œ ์ฒ˜๋ฆฌ๋˜๋Š” ํŽธ์ด๊ธฐ ๋•Œ๋ฌธ์— ์งง์€ ์ž‘์—…์„ ์šฐ๋Œ€ํ•œ๋‹ค.
