package immutable
Type Members
- class ParHashMap[K, +V] extends ParMap[K, V] with GenericParMapTemplate[K, V, ParHashMap] with ParMapLike[K, V, ParHashMap[K, V], HashMap[K, V]] with Serializable
Immutable parallel hash map, based on hash tries.
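A minimal usage sketch (assuming the Scala 2.x parallel collections, e.g. the scala-parallel-collections module on newer versions): a ParHashMap can be built through its companion object or obtained from an existing immutable HashMap via par.

```scala
import scala.collection.immutable.HashMap
import scala.collection.parallel.immutable.ParHashMap

val pm: ParHashMap[Int, String] = ParHashMap(1 -> "one", 2 -> "two", 3 -> "three")
val pm2 = HashMap(4 -> "four", 5 -> "five").par   // conversion reuses the underlying hash trie

// Bulk operations such as map run in parallel and yield a parallel map again.
val upper = pm.map { case (k, v) => (k, v.toUpperCase) }
println(upper)
```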
- K
the key type of the map
- V
the value type of the map
- Annotations
- @SerialVersionUID()
- Since
2.9
- See also
Scala's Parallel Collections Library overview section on Parallel Hash Tries for more information.
- class ParHashSet[T] extends ParSet[T] with GenericParTemplate[T, ParHashSet] with ParSetLike[T, ParHashSet[T], HashSet[T]] with Serializable
Immutable parallel hash set, based on hash tries.
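A minimal sketch under the same assumptions as the map example above: build a ParHashSet through its companion object or convert an immutable HashSet with par.

```scala
import scala.collection.immutable.HashSet
import scala.collection.parallel.immutable.ParHashSet

val ps: ParHashSet[Int] = ParHashSet(1, 2, 3, 4, 5)
val ps2 = HashSet(6, 7, 8).par                 // conversion reuses the underlying hash trie

val evens = (ps ++ ps2).filter(_ % 2 == 0)     // bulk operations execute in parallel
println(evens)
```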
- T
the element type of the set
- Annotations
- @SerialVersionUID()
- Since
2.9
- See also
Scala's Parallel Collections Library overview section on Parallel Hash Tries for more information.
- trait ParIterable[+T] extends GenIterable[T] with parallel.ParIterable[T] with GenericParTemplate[T, ParIterable] with ParIterableLike[T, ParIterable[T], immutable.Iterable[T]] with Immutable
A template trait for immutable parallel iterable collections.
This is a base trait for Scala parallel collections. It defines behaviour common to all parallel collections. Concrete parallel collections should inherit this trait and ParIterable if they want to define specific combiner factories.
Parallel operations are implemented with divide-and-conquer style algorithms that parallelize well. The basic idea is to split the collection into smaller parts until they are small enough to be operated on sequentially.
All of the parallel operations are implemented as tasks within this trait. Tasks rely on the concept of splitters, which extend iterators. Every parallel collection defines:

  def splitter: IterableSplitter[T]

which returns an instance of IterableSplitter[T], a subtype of Splitter[T]. Splitters have a method remaining to check the number of elements left to traverse, and a method split:

  def split: Seq[Splitter]

which divides the elements the splitter iterates over into disjoint subsets, returning a sequence of subsplitters. This is typically a very fast operation which simply creates wrappers around the receiver collection, and it can be repeated recursively.
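The splitter machinery itself is internal to the collections, but the divide-and-conquer strategy it enables can be illustrated with a plain recursive sketch. This is conceptual only: conquerSum, the threshold and the use of Future are illustrative assumptions, not the library's actual Splitter and Task scheduling.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

// Conceptual sketch: split until "small enough", process leaves sequentially,
// then combine partial results - the same shape the parallel tasks follow.
def conquerSum(xs: Vector[Int], threshold: Int = 1024): Future[Int] =
  if (xs.length <= threshold) Future(xs.sum)        // small enough: operate sequentially
  else {
    val (left, right) = xs.splitAt(xs.length / 2)   // the "split" step
    val l = conquerSum(left, threshold)
    val r = conquerSum(right, threshold)
    for (a <- l; b <- r) yield a + b                // the "combine" step
  }

println(Await.result(conquerSum(Vector.tabulate(10000)(identity)), Duration.Inf))
```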
Tasks are scheduled for execution through a scala.collection.parallel.TaskSupport object, which can be changed through the tasksupport setter of the collection.
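For example (a sketch; the two-thread pool size is arbitrary, and on Scala versions before 2.12 the pool class is scala.concurrent.forkjoin.ForkJoinPool rather than java.util.concurrent.ForkJoinPool):

```scala
import java.util.concurrent.ForkJoinPool
import scala.collection.parallel.ForkJoinTaskSupport

val pc = (1 to 10000).par
// Replace the default task support with one backed by a 2-thread fork/join pool.
pc.tasksupport = new ForkJoinTaskSupport(new ForkJoinPool(2))
println(pc.map(_ * 2).sum)
```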
Method newCombiner produces a new combiner. Combiners are an extension of builders. They provide a method combine which combines two combiners and returns a combiner containing elements of both combiners. This method can be implemented by aggressively copying all the elements into the new combiner or by lazily binding their results. It is recommended to avoid copying all of the elements for performance reasons, although that cost might be negligible depending on the use case. Standard parallel collection combiners avoid copying when merging results, relying either on a two-step lazy construction or on specific data-structure properties.
Methods:

  def seq: Sequential
  def par: Repr
produce the sequential or parallel implementation of the collection, respectively. Method par just returns a reference to this parallel collection. Method seq is efficient - it will not copy the elements. Instead, it will create a sequential version of the collection using the same underlying data structure. Note that this is not the case for sequential collections in general - they may copy the elements and produce a different underlying data structure.
The combination of methods toMap, toSeq or toSet along with par and seq is a flexible way to change between different collection types.
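A small sketch of switching representations (the word list is arbitrary):

```scala
val words = Vector("a", "bb", "ccc")

val pairs  = words.par.map(w => w -> w.length)  // parallel computation over a ParVector
val seqMap = pairs.toMap.seq                    // parallel map -> sequential immutable Map
val seqSeq = pairs.seq                          // back to a sequential collection; seq does not copy
println(seqMap)
```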
Since this trait extends the GenIterable trait, methods like size must also be implemented in concrete collections, while iterator forwards to splitter by default.
Each parallel collection is bound to a specific fork/join pool, on which dormant worker threads are kept. The fork/join pool contains other information such as the parallelism level, that is, the number of processors used. When a collection is created, it is assigned the default fork/join pool found in the scala.parallel package object.
Parallel collections are not necessarily ordered in terms of the foreach operation (see Traversable). Parallel sequences have a well-defined order for iterators - creating an iterator and traversing the elements linearly will always yield the same order. However, bulk operations such as foreach, map or filter always occur in undefined orders for all parallel collections.
Existing parallel collection implementations provide strict parallel iterators. Strict parallel iterators are aware of the number of elements they have yet to traverse. It is also possible to provide non-strict parallel iterators, which do not know the number of elements remaining. To do this, the new collection implementation must override isStrictSplitterCollection to false. This will make some operations unavailable.
To create a new parallel collection, extend the ParIterable trait and implement size, splitter, newCombiner and seq. Having an implicit combiner factory requires extending this trait in addition, as well as providing a companion object, as with regular collections.
Method size is implemented as a constant-time operation for parallel collections, and parallel collection operations rely on this assumption.
The higher-order functions passed to certain operations may contain side-effects. Since implementations of bulk operations may not be sequential, this means that side-effects may not be predictable and may produce data-races, deadlocks or invalidation of state if care is not taken. It is up to the programmer to either avoid using side-effects or to use some form of synchronization when accessing mutable data.
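A sketch of the pitfall and a side-effect-free alternative (the numbers are arbitrary):

```scala
// Unsafe: foreach may run concurrently and in an undefined order, and `+=`
// on a shared var is not atomic, so updates can be lost.
var total = 0
(1 to 10000).par.foreach(total += _)

// Safe: express the same computation without shared mutable state.
val sum = (1 to 10000).par.reduce(_ + _)
println(sum)
```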
- T
the element type of the collection
- Since
2.9
- trait ParMap[K, +V] extends GenMap[K, V] with GenericParMapTemplate[K, V, ParMap] with parallel.ParMap[K, V] with ParIterable[(K, V)] with ParMapLike[K, V, ParMap[K, V], immutable.Map[K, V]]
A template trait for immutable parallel maps.
The higher-order functions passed to certain operations may contain side-effects. Since implementations of bulk operations may not be sequential, this means that side-effects may not be predictable and may produce data-races, deadlocks or invalidation of state if care is not taken. It is up to the programmer to either avoid using side-effects or to use some form of synchronization when accessing mutable data.
- K
the key type of the map
- V
the value type of the map
- Since
2.9
- class ParRange extends ParSeq[Int] with Serializable
Parallel ranges.
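A minimal sketch: calling par on a sequential Range yields a ParRange without copying elements.

```scala
val pr = (1 to 1000000).par            // a scala.collection.parallel.immutable.ParRange
val total = pr.map(_.toLong).sum       // mapped and summed in parallel
println(total)
```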
- Annotations
- @SerialVersionUID()
- Since
2.9
- See also
Scala's Parallel Collections Library overview section on ParRange for more information.
- trait ParSeq[+T] extends GenSeq[T] with parallel.ParSeq[T] with ParIterable[T] with GenericParTemplate[T, ParSeq] with ParSeqLike[T, ParSeq[T], immutable.Seq[T]]
An immutable variant of ParSeq.
- trait ParSet[T] extends GenSet[T] with GenericParTemplate[T, ParSet] with parallel.ParSet[T] with ParIterable[T] with ParSetLike[T, ParSet[T], immutable.Set[T]]
An immutable variant of ParSet.
- class ParVector[+T] extends ParSeq[T] with GenericParTemplate[T, ParVector] with ParSeqLike[T, ParVector[T], immutable.Vector[T]] with Serializable
Immutable parallel vectors, based on vectors.
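A minimal sketch under the same assumptions as the earlier examples:

```scala
import scala.collection.parallel.immutable.ParVector

val pv  = ParVector(1, 2, 3, 4, 5)     // via the companion object
val pv2 = Vector(6, 7, 8).par          // or by converting an existing Vector
println((pv ++ pv2).map(_ * 10).sum)
```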
- T
the element type of the vector
- Since
2.9
- See also
Scala's Parallel Collections Library overview section on ParVector for more information.
Value Members
- def repetition[T](elem: T, len: Int): Repetition[T]
- object HashSetCombiner
- object ParHashMap extends ParMapFactory[ParHashMap] with Serializable
This object provides a set of operations needed to create immutable.ParHashMap values.
- object ParHashSet extends ParSetFactory[ParHashSet] with Serializable
This object provides a set of operations needed to create immutable.ParHashSet values.
- object ParIterable extends ParFactory[ParIterable]
This object provides a set of operations to create ParIterable values.
- object ParMap extends ParMapFactory[ParMap]
- object ParRange extends Serializable
- object ParSeq extends ParFactory[ParSeq]
This object provides a set of operations to create immutable.ParSeq values.
- object ParSet extends ParSetFactory[ParSet]
This object provides a set of operations needed to create immutable.ParSet values.
- object ParVector extends ParFactory[ParVector] with Serializable
This object provides a set of operations to create immutable.ParVector values.
This is the documentation for the Scala standard library.
Package structure
The scala package contains core types like Int, Float, Array or Option which are accessible in all Scala compilation units without explicit qualification or imports.
Notable packages include:
- scala.collection and its sub-packages contain Scala's collections framework
  - scala.collection.immutable - Immutable, sequential data-structures such as Vector, List, Range, HashMap or HashSet
  - scala.collection.mutable - Mutable, sequential data-structures such as ArrayBuffer, StringBuilder, HashMap or HashSet
  - scala.collection.concurrent - Mutable, concurrent data-structures such as TrieMap
  - scala.collection.parallel.immutable - Immutable, parallel data-structures such as ParVector, ParRange, ParHashMap or ParHashSet
  - scala.collection.parallel.mutable - Mutable, parallel data-structures such as ParArray, ParHashMap, ParTrieMap or ParHashSet
- scala.concurrent - Primitives for concurrent programming such as Futures and Promises
- scala.io - Input and output operations
- scala.math - Basic math functions and additional numeric types like BigInt and BigDecimal
- scala.sys - Interaction with other processes and the operating system
- scala.util.matching - Regular expressions
Other packages exist. See the complete list on the right.
Additional parts of the standard library are shipped as separate libraries. These include:
- scala.reflect - Scala's reflection API (scala-reflect.jar)
- scala.xml - XML parsing, manipulation, and serialization (scala-xml.jar)
- scala.swing - A convenient wrapper around Java's GUI framework called Swing (scala-swing.jar)
- scala.util.parsing - Parser combinators (scala-parser-combinators.jar)
Automatic imports
Identifiers in the scala package and the scala.Predef object are always in scope by default.
Some of these identifiers are type aliases provided as shortcuts to commonly used classes. For example, List is an alias for scala.collection.immutable.List.
Other aliases refer to classes provided by the underlying platform. For example, on the JVM, String is an alias for java.lang.String.
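A small sketch of those aliases in action (the values are arbitrary):

```scala
// `List` is the always-in-scope alias for scala.collection.immutable.List;
// on the JVM, `String` is the alias for java.lang.String.
val xs: List[Int] = scala.collection.immutable.List(1, 2, 3)
val s: String = java.lang.String.valueOf(42)
println(s"$xs $s")
```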