By now, you should be comfortable with the models and tools that Go's core provides for writing mostly race-free concurrent code.
We can now create goroutines and channels with ease, manage basic communication across channels, coordinate data without race conditions, and detect such conditions as they arise.
However, we have yet to manage larger distributed systems or to deal with lower-level consistency problems. We've used only a basic, simplistic mutex so far, but we are about to look at a more complicated and expressive way of handling mutual exclusion.
By the end of this chapter, you should be able to expand the concurrency patterns from the previous chapter into distributed systems, drawing on a myriad of concurrency models and systems from other languages. We'll also look, at a high level, at some consistency models that you can utilize to further refine your coding strategies for single-source and distributed applications.
In Chapter 2, Understanding the Concurrency Model, we introduced sync.Mutex and how to invoke a mutual exclusion lock within your code, but there's more nuance to consider with the sync package and the Mutex type.
We've mentioned that, in an ideal world, you should be able to maintain synchronization in your applications by using goroutines alone. Indeed, this is probably best described as the canonical approach in Go, although the sync package does provide a few other utilities, including mutexes.
Whenever possible, we'll stick with goroutines and channels to manage consistency, but the mutex provides a more traditional and granular way to lock and access data. If you've ever worked with another concurrent language (or a concurrency package within a language), odds are you've had experience with either a mutex or a philosophical analog. In the following chapters, we'll look at ways of extending and exploiting mutexes to do a little more out of the box.
If we look at the sync package, we'll see there are a couple of different mutex types. The first is sync.Mutex, which we've explored; another is sync.RWMutex. The RWMutex struct provides a multiple-reader, single-writer lock. This can be useful if you want to allow reads of a resource but take a mutex-like lock when a write is attempted. It is best utilized when you expect a function or subprocess to do frequent reads but infrequent writes, yet it still cannot afford a dirty read.
Let's look at an example that updates the date/time every 10 seconds (acquiring a write lock to do so), yet reads and outputs the current value every second, as shown in the following code:
package main

import (
	"fmt"
	"sync"
	"time"
)

type TimeStruct struct {
	totalChanges int
	currentTime  time.Time
	rwLock       sync.RWMutex
}

var TimeElement TimeStruct

func updateTime() {
	// Writers take the full, exclusive lock.
	TimeElement.rwLock.Lock()
	defer TimeElement.rwLock.Unlock()
	TimeElement.currentTime = time.Now()
	TimeElement.totalChanges++
}

func main() {
	var wg sync.WaitGroup

	TimeElement.totalChanges = 0
	TimeElement.currentTime = time.Now()
	timer := time.NewTicker(1 * time.Second)
	writeTimer := time.NewTicker(10 * time.Second)
	endTimer := make(chan bool)

	// End the loop after 30 seconds so the program terminates.
	time.AfterFunc(30*time.Second, func() { endTimer <- true })

	wg.Add(1)
	go func() {
		defer wg.Done()
		for {
			select {
			case <-timer.C:
				// Readers take the shared read lock.
				TimeElement.rwLock.RLock()
				fmt.Println(TimeElement.totalChanges,
					TimeElement.currentTime.String())
				TimeElement.rwLock.RUnlock()
			case <-writeTimer.C:
				updateTime()
			case <-endTimer:
				timer.Stop()
				writeTimer.Stop()
				return
			}
		}
	}()

	wg.Wait()
	fmt.Println(TimeElement.currentTime.String())
}
There are two different pairs of methods for performing locks/unlocks on RWMutex:

Lock() and Unlock(): These acquire and release the full, exclusive lock. While it is held, no other reader or writer can proceed; this is the pair a writer should use.
RLock() and RUnlock(): These acquire and release a shared read lock. Any number of readers can hold it at the same time, but it blocks, and is blocked by, a writer holding the exclusive lock.
The read pair, RLock() and RUnlock(), is what we've used in the reader for this example, because we want to simulate a real-world mix of frequent reads and occasional writes. The net effect is that the goroutine that outputs the current time holds a read lock while it reads the currentTime variable, so it can never observe a dirty, half-updated value; at worst, it reports a value that is one tick stale until updateTime() releases the write lock on rwLock. The ticker intervals exist solely to give us time to witness the lock in motion. An RWMutex can be held by many readers at once or by a single writer.
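To see the multiple-reader property in isolation, here is a small sketch that counts how many goroutines hold the read lock simultaneously; the maxConcurrentReaders name and the 50 ms hold time are our own illustrative choices, not from the chapter:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// maxConcurrentReaders launches n goroutines that each take a read lock,
// hold it briefly, and release it, recording the highest number of
// readers observed holding the lock at the same time.
func maxConcurrentReaders(n int) int32 {
	var rw sync.RWMutex
	var active, peak int32
	var wg sync.WaitGroup

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			rw.RLock()
			cur := atomic.AddInt32(&active, 1)
			// Update the high-water mark of simultaneous readers.
			for {
				p := atomic.LoadInt32(&peak)
				if cur <= p || atomic.CompareAndSwapInt32(&peak, p, cur) {
					break
				}
			}
			time.Sleep(50 * time.Millisecond) // hold the read lock briefly
			atomic.AddInt32(&active, -1)
			rw.RUnlock()
		}()
	}
	wg.Wait()
	return peak
}

func main() {
	// With a plain sync.Mutex, the readers would serialize and the peak
	// would be 1; RWMutex lets their critical sections overlap.
	fmt.Println("peak concurrent readers:", maxConcurrentReaders(5))
}
```

Swapping the RWMutex for a plain Mutex (and RLock/RUnlock for Lock/Unlock) is a quick way to convince yourself that the ordinary mutex serializes readers that RWMutex would allow to proceed together.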