Mastering the GMP Scheduler and Goroutines
Go schedules goroutines with its own user-space scheduler, commonly described by the GMP model: a G is a goroutine, an M is an OS thread (a "machine"), and a P is a logical processor that owns a run queue of runnable Gs. An M must hold a P to execute Go code, and the number of Ps defaults to GOMAXPROCS, which bounds how many goroutines can run in parallel.
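As a quick way to see these pieces from a running program, the sketch below (a minimal illustration, not part of the exercise later in this section) uses the runtime package to print the number of logical CPUs, the current GOMAXPROCS value (the number of Ps), and the number of live goroutines.

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// Logical CPUs visible to the process.
	fmt.Println("NumCPU:", runtime.NumCPU())
	// GOMAXPROCS(0) reports the current setting without changing it;
	// this is how many Ps the scheduler uses.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
	// Goroutines currently alive (the main goroutine counts).
	fmt.Println("NumGoroutine:", runtime.NumGoroutine())
}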
Key Concept: Work Stealing. If a P's local run queue becomes empty, it first looks for work in the global run queue and the network poller, and then tries to steal half of the goroutines from another P's local run queue. This keeps all Ps busy without a central dispatcher.
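There is no API for triggering a steal directly, but you can watch the scheduler balance work. The sketch below is only an illustration (the busy-loop workload and the multiplier of four are arbitrary choices): it starts more CPU-bound goroutines than there are Ps, and running it with GODEBUG=schedtrace=1000 set makes the runtime print a scheduler summary to stderr every second, including the global run queue length and each P's local run queue length.

package main

import (
	"fmt"
	"runtime"
	"sync"
)

// Run with: GODEBUG=schedtrace=1000 go run main.go
// Each trace line shows, among other things, the global run queue
// length and the per-P local run queue lengths.
func main() {
	numPs := runtime.GOMAXPROCS(0)
	var wg sync.WaitGroup

	// Start several CPU-bound goroutines per P so the run queues fill up.
	for i := 0; i < numPs*4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			sum := 0
			for j := 0; j < 1_000_000_000; j++ { // burn CPU for roughly a second
				sum += j
			}
			_ = sum
		}()
	}

	wg.Wait()
	fmt.Println("all goroutines finished")
}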
Exercise: Implement a worker pool that processes a stream of jobs concurrently using a fixed number of workers. One possible solution:
package main

import (
	"fmt"
	"sync"
	"time"
)

// worker pulls jobs from the jobs channel until it is closed,
// simulates one second of work per job, and sends the doubled
// job value on the results channel.
func worker(id int, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs {
		fmt.Printf("Worker %d started job %d\n", id, j)
		time.Sleep(time.Second) // Simulate work
		fmt.Printf("Worker %d finished job %d\n", id, j)
		results <- j * 2
	}
}

func main() {
	const numJobs = 5
	const numWorkers = 3

	jobs := make(chan int, numJobs)
	results := make(chan int, numJobs)
	var wg sync.WaitGroup

	// Start workers
	for w := 1; w <= numWorkers; w++ {
		wg.Add(1)
		go worker(w, jobs, results, &wg)
	}

	// Send jobs, then close the channel so each worker's range loop ends
	for j := 1; j <= numJobs; j++ {
		jobs <- j
	}
	close(jobs)

	// Wait for workers in a separate goroutine to close results
	go func() {
		wg.Wait()
		close(results)
	}()

	// Collect results until the results channel is closed
	for r := range results {
		fmt.Println("Result:", r)
	}
}
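Two details of this solution are worth noting. The jobs channel is closed after the last job is sent so that each worker's range loop terminates, and the results channel is closed from a separate goroutine that waits on the WaitGroup. If main called wg.Wait() before the collection loop instead, workers could block sending into a full results channel while nothing is receiving (the buffer here happens to be large enough, but the separate goroutine keeps the pattern deadlock-free regardless of buffer size), and closing results from inside a worker would risk a send on an already-closed channel.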
Check your understanding:

How do goroutines differ from OS threads?
Answer: Goroutines are user-space threads managed by the Go runtime. They start with a small, growable stack (about 2 KB, versus 1 MB or more for a typical OS thread) and have faster startup/teardown and lower context-switch overhead than OS threads.

What happens when a goroutine makes a blocking system call?
Answer: The M (OS thread) executing that G blocks along with it. The P (Processor) detaches from the blocked M and acquires another M (from the idle pool, or a newly created one) to continue executing the other Gs in its run queue. This mechanism is called "handoff".
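As a rough illustration of the first answer, the sketch below (the count of 100,000 is an arbitrary choice) launches a large number of goroutines; creating the same number of OS threads would typically run into thread-count or memory limits long before a program like this finishes.

package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 100_000 // arbitrary count; each goroutine starts with a ~2 KB stack
	var wg sync.WaitGroup
	values := make(chan int, n)

	// Launch n goroutines, each doing a trivial amount of work.
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(v int) {
			defer wg.Done()
			values <- v
		}(i)
	}

	wg.Wait()
	close(values)

	total := 0
	for v := range values {
		total += v
	}
	fmt.Println("goroutines launched:", n, "sum:", total)
}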