The synchronized Statement

A synchronized statement acquires a mutual-exclusion lock (§17.13) on behalf of the executing thread, executes a block, then releases the lock. While the executing thread owns the lock, no other thread may acquire the lock.
SynchronizedStatement:
    synchronized ( Expression ) Block
The type of Expression must be a reference type, or a compile-time error occurs.
A synchronized statement is executed by first evaluating the Expression.

If evaluation of the Expression completes abruptly for some reason, then the synchronized statement completes abruptly for the same reason.

Otherwise, if the value of the Expression is null, a NullPointerException is thrown.

Otherwise, let the non-null value of the Expression be V. The executing thread locks the lock associated with V. Then the Block is executed. If execution of the Block completes normally, then the lock is unlocked and the synchronized statement completes normally. If execution of the Block completes abruptly for any reason, then the lock is unlocked and the synchronized statement then completes abruptly for the same reason.
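For example, the following sketch (illustrative, not part of the specification; the class name LockRelease is assumed) shows that the lock is released even when the Block completes abruptly by throwing an exception, so the same thread can immediately acquire it again:

class LockRelease {
    public static void main(String[] args) {
        Object v = new Object();                  // the non-null value V of the Expression
        try {
            synchronized (v) {                    // the executing thread locks the lock of V
                throw new RuntimeException();     // the Block completes abruptly
            }                                     // the lock is unlocked on the way out
        } catch (RuntimeException e) {
            // the synchronized statement completed abruptly for the same reason
        }
        synchronized (v) {                        // the lock can be acquired again
            System.out.println("lock released after abrupt completion");
        }
    }
}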
Acquiring the lock associated with an object does not of itself prevent other threads from accessing fields of the object or invoking unsynchronized methods on the object. Other threads can also use synchronized methods or the synchronized statement in a conventional manner to achieve mutual exclusion.
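For instance (an illustrative sketch, not from the specification; the class Counter is assumed), a thread executing increment holds the lock on the Counter instance, but that does not prevent another thread from calling the unsynchronized peek method on the same instance at the same time:

class Counter {
    private int value;

    synchronized void increment() {   // acquires the lock on this before executing
        value++;
    }

    int peek() {                      // unsynchronized: may run even while another thread
        return value;                 // holds the lock on this inside increment
    }
}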
The locks acquired by synchronized statements are the same as the locks that are acquired implicitly by synchronized methods; see §8.4.3.5. A single thread may hold a lock more than once. The example:
class Test {
    public static void main(String[] args) {
        Test t = new Test();
        synchronized (t) {
            synchronized (t) {
                System.out.println("made it!");
            }
        }
    }
}
prints:
made it!
This example would deadlock if a single thread were not permitted to lock a lock more than once.
A synchronized method acquires a lock (§17.1) before it executes. For a class (static) method, the lock associated with the Class object (§20.3) for the method's class is used. For an instance method, the lock associated with this (the object for which the method was invoked) is used. These are the same locks that can be used by the synchronized statement; thus, the code:
class Test {
    int count;
    synchronized void bump() {
        count++;
    }
    static int classCount;
    static synchronized void classBump() {
        classCount++;
    }
}
has exactly the same effect as:
class BumpTest {
    int count;
    void bump() {
        synchronized (this) {
            count++;
        }
    }
    static int classCount;
    static void classBump() {
        try {
            synchronized (Class.forName("BumpTest")) {
                classCount++;
            }
        } catch (ClassNotFoundException e) {
            ...
        }
    }
}
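In versions of the Java language later than the one described here, a class literal can denote the same Class object directly, which avoids the checked ClassNotFoundException; a sketch of classBump written that way (not part of this specification):

static void classBump() {
    synchronized (BumpTest.class) {   // the same lock used by static synchronized methods of BumpTest
        classCount++;
    }
}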
The more elaborate example:
public class Box {
    private Object boxContents;

    public synchronized Object get() {
        Object contents = boxContents;
        boxContents = null;
        return contents;
    }

    public synchronized boolean put(Object contents) {
        if (boxContents != null)
            return false;
        boxContents = contents;
        return true;
    }
}
defines a class which is designed for concurrent use. Each instance of the class Box has an instance variable boxContents that can hold a reference to any object. You can put an object in a Box by invoking put, which returns false if the box is already full. You can get something out of a Box by invoking get, which returns a null reference if the box is empty.
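For example (an illustrative sketch, not part of the specification), code in some method of another class could use a Box as follows; under concurrent use, the synchronized declarations ensure that each call observes and updates boxContents as a single indivisible step with respect to the other synchronized methods of the same instance:

Box box = new Box();
box.put("first");            // returns true: the box was empty
box.put("second");           // returns false: the box is already full
Object item = box.get();     // returns "first" and empties the box
Object none = box.get();     // returns null: the box is empty again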
If put and get were not synchronized, and two threads were executing methods for the same instance of Box at the same time, then the code could misbehave. It might, for example, lose track of an object because two invocations of put occurred at the same time.
See §17 for more discussion of threads and locks.
While most of the discussion in the preceding chapters is concerned only with the behavior of Java code as executed one statement or expression at a time, that is, by a single thread, each Java Virtual Machine can support many threads of execution at once. These threads independently execute Java code that operates on Java values and objects residing in a shared main memory. Threads may be supported by having many hardware processors, by time-slicing a single hardware processor, or by time-slicing many hardware processors.
Java supports the coding of programs that, though concurrent, still exhibit deterministic behavior, by providing mechanisms for synchronizing the concurrent activity of threads. To synchronize threads, Java uses monitors, which are a high-level mechanism for allowing only one thread at a time to execute a region of code protected by the monitor. The behavior of monitors is explained in terms of locks; there is a lock associated with each object.
The synchronized statement (§14.17) performs two special actions relevant only to multithreaded operation: (1) after computing a reference to an object but before executing its body, it locks a lock associated with the object, and (2) after execution of the body has completed, either normally or abruptly, it unlocks that same lock. As a convenience, a method may be declared synchronized; such a method behaves as if its body were contained in a synchronized statement.
The methods wait (§20.1.6, §20.1.7, §20.1.8), notify (§20.1.9), and notifyAll (§20.1.10) of class Object support an efficient transfer of control from one thread to another. Rather than simply "spinning" (repeatedly locking and unlocking an object to see whether some internal state has changed), which consumes computational effort, a thread can suspend itself using wait until such time as another thread awakens it using notify. This is especially appropriate in situations where threads have a producer-consumer relationship (actively cooperating on a common goal) rather than a mutual exclusion relationship (trying to avoid conflicts while sharing a common resource).
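For example, a variant of the Box class shown earlier (an illustrative sketch, not part of the specification; the name WaitingBox is assumed) can use wait and notifyAll so that a thread calling get suspends itself until another thread has put something into the box, rather than spinning:

class WaitingBox {
    private Object boxContents;

    public synchronized Object get() throws InterruptedException {
        while (boxContents == null)
            wait();                   // releases the lock and suspends this thread
        Object contents = boxContents;
        boxContents = null;
        notifyAll();                  // awaken any thread waiting in put
        return contents;
    }

    public synchronized void put(Object contents) throws InterruptedException {
        while (boxContents != null)
            wait();                   // wait until the box has been emptied
        boxContents = contents;
        notifyAll();                  // awaken any thread waiting in get
    }
}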
As a thread executes code, it carries out a sequence of actions. A thread may use the value of a variable or assign it a new value. (Other actions include arithmetic operations, conditional tests, and method invocations, but these do not involve variables directly.) If two or more concurrent threads act on a shared variable, there is a possibility that the actions on the variable will produce timing-dependent results. This dependence on timing is inherent in concurrent programming, and it produces one of the few places in Java where the result of executing a program is not determined solely by this specification.
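For instance (an illustrative sketch, not part of the specification; the class RaceDemo is assumed), two threads that each increment a shared variable many times without synchronization may lose some of the increments, and the value finally printed can vary from run to run:

class RaceDemo {
    static int counter;   // a shared class variable

    public static void main(String[] args) throws InterruptedException {
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++)
                    counter++;        // a use of counter followed by an assign; not atomic
            }
        };
        Thread first = new Thread(work);
        Thread second = new Thread(work);
        first.start();
        second.start();
        first.join();
        second.join();
        System.out.println(counter);  // 200000 at most; often less, depending on timing
    }
}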
Each thread has a working memory, in which it may keep copies of the values of variables from the main memory that is shared between all threads. To access a shared variable, a thread usually first obtains a lock and flushes its working memory. This guarantees that shared values will thereafter be loaded from the shared main memory into the thread's working memory. When a thread unlocks a lock, it guarantees that the values held in its working memory will be written back to the main memory.
This chapter explains the interaction of threads with the main memory, and thus with each other, in terms of certain low-level actions. There are rules about the order in which these actions may occur. These rules impose constraints on any implementation of Java, and a Java programmer may rely on the rules to predict the possible behaviors of a concurrent Java program. The rules do, however, intentionally give the implementor certain freedoms; the intent is to permit certain standard hardware and software techniques that can greatly improve the speed and efficiency of concurrent code.
Briefly put, the rules are designed so that threads which interact only through properly synchronized access to shared variables see each other's updates reliably; the one exception to the otherwise atomic treatment of variables concerns long and double values (see §17.4).

A variable is any location within a Java program that may be stored into. This includes not only class variables and instance variables but also components of arrays. Variables are kept in a main memory that is shared by all threads. Because it is impossible for one thread to access parameters or local variables of another thread, it does not matter whether parameters and local variables are thought of as residing in the shared main memory or in the working memory of the thread that owns them.
Every thread has a working memory in which it keeps its own working copy of variables that it must use or assign. As the thread executes a Java program, it operates on these working copies. The main memory contains the master copy of every variable. There are rules about when a thread is permitted or required to transfer the contents of its working copy of a variable into the master copy, or vice versa.
The main memory also contains locks; there is one lock associated with each object. Threads may compete to acquire a lock.
For the purposes of this chapter, the verbs use, assign, load, store, lock, and unlock name actions that a thread can perform. The verbs read, write, lock, and unlock name actions that the main memory subsystem can perform. Each of these actions is atomic (indivisible).
A use or assign action is a tightly coupled interaction between a thread's execution engine and the thread's working memory. A lock or unlock action is a tightly coupled interaction between a thread's execution engine and the main memory. But the transfer of data between the main memory and a thread's working memory is loosely coupled. When data is copied from the main memory to a working memory, two actions must occur: a read action performed by the main memory followed some time later by a corresponding load action performed by the working memory. When data is copied from a working memory to the main memory, two actions must occur: a store action performed by the working memory followed some time later by a corresponding write action performed by the main memory. There may be some transit time between main memory and a working memory, and the transit time may be different for each transaction; thus actions initiated by a thread on different variables may be viewed by another thread as occurring in a different order. For each variable, however, the actions in main memory on behalf of any one thread are performed in the same order as the corresponding actions by that thread. (This is explained in greater detail below.)
A single Java thread issues a stream of use, assign, lock, and unlock actions as dictated by the semantics of the Java program it is executing. The underlying Java implementation is then required additionally to perform appropriate load, store, read, and write actions so as to obey a certain set of constraints, explained below. If the Java implementation correctly follows these rules and the Java application programmer follows certain other rules of programming, then data can be reliably transferred between threads through shared variables. The rules are designed to be "tight" enough to make this possible but "loose" enough to allow hardware and software designers considerable freedom to improve speed and throughput through such mechanisms as registers, queues, and caches.
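As a sketch of what these rules buy the programmer (illustrative, not part of the specification; the class FlagDemo is assumed), a thread that polls a shared flag without any locking is not obliged ever to reload the flag from main memory into its working memory, whereas guarding both the write and the read with a synchronized statement on a common object forces the necessary store/write and read/load actions to occur:

class FlagDemo {
    private boolean done;
    private final Object lock = new Object();

    void finish() {
        synchronized (lock) {      // unlocking forces the working copy of done out to main memory
            done = true;
        }
    }

    void awaitDone() throws InterruptedException {
        for (;;) {
            synchronized (lock) {  // locking forces a fresh load of done from main memory
                if (done)
                    return;
            }
            Thread.sleep(10);      // pause between checks rather than spinning tightly
        }
    }
}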
Here are the detailed definitions of each of the actions:

A use action (by a thread) transfers the contents of the thread's working copy of a variable to the thread's execution engine.

An assign action (by a thread) transfers a value from the thread's execution engine into the thread's working copy of a variable.

A read action (by the main memory) transmits the contents of the master copy of a variable to a thread's working memory, for use by a later load action.

A load action (by a thread) puts a value transmitted from main memory by a read action into the thread's working copy of a variable.

A store action (by a thread) transmits the contents of the thread's working copy of a variable to main memory, for use by a later write action.

A write action (by the main memory) puts a value transmitted from a thread's working memory by a store action into the master copy of a variable in main memory.

A lock action (by a thread, tightly coupled with main memory) causes the thread to acquire one claim on a particular lock.

An unlock action (by a thread, tightly coupled with main memory) causes the thread to release one claim on a lock.
Thus the interaction of a thread with a variable over time consists of a sequence of use, assign, load, and store actions. Main memory performs a read action for every load and a write action for every store. A thread's interaction with a lock over time consists of a sequence of lock and unlock actions. All the globally visible behavior of a thread thus comprises all of the thread's actions on variables and locks.