Manages subregions and cached data. Each region
can contain multiple subregions and entries for data.
Regions provide a hierarchical name space within the cache. A region can also be used to group cached objects for management purposes.
The Region interface contains two sets of APIs: region management APIs and (potentially) distributed operations on entries. Non-distributed operations on entries are provided by the inner interface, com.gemstone.gemfire.cache.Region.Entry.
Each com.gemstone.gemfire.cache.Cache defines a single top-level region called the root region. User applications can create subregions of the root region for an isolated name space and for object grouping.
A region's name can be any String, except that it must not contain the region name separator, a forward slash (/).
Regions can be referenced by a relative path name from any region higher in the hierarchy using #getSubregion. You can get the full path from the root region with #getFullPath. The name separator is used to concatenate all of the region names together from the root, starting with the root's subregions.
Relative region names provide a convenient way to locate a subregion directly from some higher region. For example, suppose a region named 3rd_level_region has parent region 2nd_level_region; region 2nd_level_region in turn has parent region 1st_level_region; and region 1st_level_region is a child of the root region. Then the user can get the region 3rd_level_region from the root region by issuing:
region3 = root.getSubregion("1st_level_region/2nd_level_region/3rd_level_region");
or the user can get the region 3rd_level_region from region 1st_level_region by issuing:
region3 = region1.getSubregion("2nd_level_region/3rd_level_region");
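The path resolution above can be sketched with a small toy model. This is not the GemFire implementation, only a self-contained illustration of how "/"-separated relative paths walk down a region hierarchy and how the full path is built up from the root; the ToyRegion class and its methods are invented for this sketch.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of a region hierarchy, illustrating relative-path lookup
// and full-path construction. NOT the GemFire implementation.
class ToyRegion {
    private final String name;
    private final ToyRegion parent;
    private final Map<String, ToyRegion> subregions = new HashMap<>();

    ToyRegion(String name, ToyRegion parent) {
        this.name = name;
        this.parent = parent;
    }

    ToyRegion createSubregion(String childName) {
        ToyRegion child = new ToyRegion(childName, this);
        subregions.put(childName, child);
        return child;
    }

    // Resolve a relative path such as "2nd_level_region/3rd_level_region".
    ToyRegion getSubregion(String relativePath) {
        ToyRegion current = this;
        for (String part : relativePath.split("/")) {
            current = current.subregions.get(part);
            if (current == null) {
                return null; // no such subregion
            }
        }
        return current;
    }

    // Full path from the root, using "/" as the separator.
    String getFullPath() {
        return parent == null ? "" : parent.getFullPath() + "/" + name;
    }
}

public class PathDemo {
    public static void main(String[] args) {
        ToyRegion root = new ToyRegion("root", null);
        ToyRegion level1 = root.createSubregion("1st_level_region");
        ToyRegion level2 = level1.createSubregion("2nd_level_region");
        ToyRegion level3 = level2.createSubregion("3rd_level_region");

        // Both lookups from the example above reach the same region.
        System.out.println(root.getSubregion(
            "1st_level_region/2nd_level_region/3rd_level_region") == level3); // true
        System.out.println(level1.getSubregion(
            "2nd_level_region/3rd_level_region") == level3); // true
        System.out.println(level3.getFullPath());
    }
}
```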
Region entries are identified by their key. Any Object can be used as a key as long as it is unique region-wide and implements both the equals and hashCode methods. For regions with distributed scope, the key must also be Serializable.
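A key type meeting these requirements might look like the following sketch. OrderKey is a hypothetical class invented for illustration; it implements equals and hashCode consistently and is Serializable, so it would also satisfy the requirement for distributed scope. A plain HashMap stands in for a region here, since both rely on the same equals/hashCode contract.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical key class: equals and hashCode are implemented over all
// identifying fields, and the class is Serializable so it could be used
// as a key in a region with distributed scope.
public class OrderKey implements Serializable {
    private final String customerId;
    private final int orderNumber;

    public OrderKey(String customerId, int orderNumber) {
        this.customerId = customerId;
        this.orderNumber = orderNumber;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof OrderKey)) return false;
        OrderKey other = (OrderKey) o;
        return orderNumber == other.orderNumber
                && customerId.equals(other.customerId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(customerId, orderNumber);
    }

    public static void main(String[] args) {
        // HashMap stands in for a region: a logically equal key
        // locates the same entry.
        Map<OrderKey, String> entries = new HashMap<>();
        entries.put(new OrderKey("cust-1", 42), "pending");
        System.out.println(entries.get(new OrderKey("cust-1", 42))); // pending
    }
}
```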
Regions and their entries can be locked. The Lock obtained from #getRegionDistributedLock is a distributed lock on the entire Region, and the Lock obtained from #getDistributedLock is a distributed lock on an individual entry.
If the scope is Scope.GLOBAL, methods that modify, destroy, or invalidate the entries in this region also acquire a distributed lock. See the documentation for #getDistributedLock and #getRegionDistributedLock for details on the implicit locking that occurs for regions with Scope.GLOBAL.
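Whatever the source of the Lock, the usual lock/try/finally idiom applies. The sketch below uses a plain java.util.concurrent.locks.ReentrantLock as a stand-in for the distributed lock a region would hand back, since a running distributed system cannot be assumed here; only the usage pattern is the point.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockPatternDemo {
    public static void main(String[] args) {
        // Stand-in for the Lock a region would return from
        // getRegionDistributedLock() or getDistributedLock(key).
        Lock regionLock = new ReentrantLock();

        regionLock.lock();
        try {
            // Critical section: with a real distributed lock, no other
            // member of the system could modify the locked region
            // (or entry) while this block runs.
            System.out.println("holding the lock");
        } finally {
            // Always release in a finally block so the lock is freed
            // even if the critical section throws.
            regionLock.unlock();
        }
    }
}
```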
Unless otherwise specified, all of these methods throw a
CacheClosedException
if the Cache is closed at the time of
invocation, or a RegionDestroyedException
if this region has been
destroyed.
Serializability requirements for arguments: several methods in the Region API take parameters such as key, value, and callback parameters. All of these parameters are typed as Object. For distributed regions, keys, values, and callback parameters must be serializable. Failure to meet these serialization requirements causes API methods to throw IllegalArgumentException.
Implementation of the java.util.concurrent.ConcurrentMap interface was added in version 6.5. These methods provide various levels of concurrency guarantees based on the scope and data policy of the region. They are implemented in the peer cache and the client/server cache, but are disallowed in peer regions with the NORMAL or EMPTY data policies. The semantics of the ConcurrentMap methods on a partitioned region are consistent with those expected of a ConcurrentMap. In particular, multiple writes to the same key in the same partitioned region from different JVMs are performed atomically.
The same is true for a region with GLOBAL scope: all operations are performed atomically, since a distributed lock is held while the operation is done. The same is true for a region with LOCAL scope: all operations are performed atomically, since the underlying map is a concurrent hash map and no distribution is involved.
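The ConcurrentMap methods follow the standard java.util.concurrent.ConcurrentMap contract. The sketch below demonstrates that contract with a plain ConcurrentHashMap standing in for a region, since it is the same map type the LOCAL-scope case relies on; a real region would apply these semantics with the atomicity guarantees described above for its scope and data policy.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ConcurrentMapDemo {
    public static void main(String[] args) {
        // ConcurrentHashMap stands in for a region here.
        ConcurrentMap<String, Integer> region = new ConcurrentHashMap<>();

        // putIfAbsent: only the first writer for a key wins.
        System.out.println(region.putIfAbsent("stock", 100)); // null (no prior value)
        System.out.println(region.putIfAbsent("stock", 200)); // 100 (value unchanged)

        // replace(key, old, new): succeeds only if the current value matches.
        System.out.println(region.replace("stock", 100, 90)); // true
        System.out.println(region.replace("stock", 100, 80)); // false (current is 90)

        // remove(key, value): removes only if the current value matches.
        System.out.println(region.remove("stock", 999)); // false
        System.out.println(region.remove("stock", 90));  // true
    }
}
```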
For peer REPLICATE and PRELOADED regions, atomicity is limited to threads in the JVM where the operation starts. There is no coordination with other members of the system unless the operation is performed in a transaction.
For client/server regions, atomicity is determined by the scope and data policy of the server region, as described above. The operation is actually performed on the server: clients always send the ConcurrentMap operation to the server, and the result returned by the ConcurrentMap method on the client reflects what was done on the server. The same goes for any CacheListener called on the client. Any local state on the client is updated to be consistent with the state change made on the server.