class Flux[T] extends Publisher[T] with MapablePublisher[T] with OnErrorReturn[T] with FluxLike[T] with Filter[T] with Scannable
A Reactive Streams Publisher with rx operators that emits 0 to N elements, and then completes (successfully or with an error).

It is intended to be used in implementations and return types. Input parameters should keep using raw Publisher as much as possible.
If it is known that the underlying Publisher will emit 0 or 1 element, Mono should be used instead.
Note that using state in the lambdas used within Flux operators should be avoided, as these may be shared between several Subscribers.
- T
the element type of this Reactive Streams Publisher
- See also
- Alphabetic
- By Inheritance
- Flux
- Scannable
- Filter
- FluxLike
- OnErrorReturn
- MapablePublisher
- Publisher
- AnyRef
- Any
- Hide All
- Show All
- Public
- All
Value Members
-
final
def
!=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
##(): Int
- Definition Classes
- AnyRef → Any
-
final
def
==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
actuals(): Stream[_ <: Scannable]
- Definition Classes
- Scannable
-
final
def
all(predicate: (T) ⇒ Boolean): Mono[Boolean]
Emit a single boolean true if all values of this sequence match the given predicate.
The implementation uses short-circuit logic and completes with false if the predicate doesn't match a value.
- predicate
the predicate to match all emitted items
- returns
a Mono of all evaluations
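A minimal usage sketch of the short-circuit behaviour; the `Flux.just` factory and the `println` subscriber are illustrative, not part of this entry:

```scala
import reactor.core.scala.publisher.Flux

// Every element matches, so the resulting Mono emits true.
Flux.just(2, 4, 6)
  .all(i => i % 2 == 0)
  .subscribe(matched => println(s"all even: $matched"))

// Short-circuit: evaluation completes with false at the first
// non-matching element (3), without consuming the rest.
Flux.just(2, 3, 4)
  .all(i => i % 2 == 0)
  .subscribe(matched => println(s"all even: $matched"))
```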
-
final
def
any(predicate: (T) ⇒ Boolean): Mono[Boolean]
Emit a single boolean true if any of the values of this Flux sequence match the predicate.
The implementation uses short-circuit logic and completes with true if the predicate matches a value.
- predicate
predicate tested upon values
- returns
a new Mono with true if any value satisfies the predicate and false otherwise
-
final
def
as[P](transformer: (Flux[T]) ⇒ P): P
Immediately apply the given transformation to this Flux in order to generate a target type.
flux.as(Mono::from).subscribe()
- P
the returned type
- transformer
the Function1 to immediately map this Flux into a target type instance.
- returns
an instance of P
- See also
Flux.compose for a bounded conversion to Publisher
-
final
def
asInstanceOf[T0]: T0
- Definition Classes
- Any
- final def asJava(): publisher.Flux[T]
-
final
def
blockFirst(d: Duration): Option[T]
Blocks until the upstream signals its first value or completes.
- d
max duration timeout to wait for.
- returns
the Some value or None
-
final
def
blockFirst(): Option[T]
Blocks until the upstream signals its first value or completes.
- returns
the Some value or None
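A sketch of the blocking variants, assuming the `Flux.just` and `Flux.empty` factories:

```scala
import scala.concurrent.duration._
import reactor.core.scala.publisher.Flux

// Non-empty source: blocks the calling thread until the first value arrives.
val first: Option[Int] = Flux.just(1, 2, 3).blockFirst() // Some(1)

// Empty source: blocks until completion (bounded by the timeout) and yields None.
val none: Option[Int] = Flux.empty[Int].blockFirst(5.seconds) // None
```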
-
final
def
blockLast(d: Duration): Option[T]
Blocks until the upstream completes and returns the last emitted value.
- d
max duration timeout to wait for.
- returns
the last value or None
-
final
def
blockLast(): Option[T]
Blocks until the upstream completes and returns the last emitted value.
- returns
the last value or None
-
final
def
buffer(timespan: Duration, timeshift: Duration): Flux[Seq[T]]
Collect incoming values into multiple Seq delimited by the given timeshift period. Each Seq bucket will last until the timespan has elapsed, thus releasing the bucket to the returned Flux.
When timeshift > timespan : dropping buffers

When timeshift < timespan : overlapping buffers

When timeshift == timespan : exact buffers
- timespan
the duration to use to release buffered lists
- timeshift
the duration to use to create a new bucket
- returns
a microbatched Flux of Seq delimited by the given period timeshift and sized by timespan
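A sketch of the overlapping case, assuming the `Flux.interval` factory and a ticking source:

```scala
import scala.concurrent.duration._
import reactor.core.scala.publisher.Flux

// timeshift (100ms) < timespan (250ms): a new bucket opens every 100ms
// and stays open for 250ms, so consecutive buffers overlap in content.
Flux.interval(50.millis)
  .take(10)
  .buffer(250.millis, 100.millis)
  .subscribe(bucket => println(bucket))
```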
-
final
def
buffer(timespan: Duration): Flux[Seq[T]]
Collect incoming values into multiple Seq that will be pushed into the returned Flux every timespan.
-
final
def
buffer[C <: ListBuffer[T]](other: Publisher[_], bufferSupplier: () ⇒ C): Flux[Seq[T]]
Collect incoming values into multiple Seq delimited by the given Publisher signals.
- C
the supplied Seq type
- other
the other Publisher to subscribe to for emitting and recycling receiving bucket
- bufferSupplier
the collection to use for each data segment
- returns
a microbatched Flux of Seq delimited by a Publisher
-
final
def
buffer(other: Publisher[_]): Flux[Seq[T]]
Collect incoming values into multiple Seq delimited by the given Publisher signals.
- other
the other Publisher to subscribe to for emitting and recycling receiving bucket
- returns
a microbatched Flux of Seq delimited by a Publisher
-
final
def
buffer[C <: ListBuffer[T]](maxSize: Int, skip: Int, bufferSupplier: () ⇒ C): Flux[Seq[T]]
Collect incoming values into multiple mutable.Seq that will be pushed into the returned Flux when the given max size is reached or onComplete is received. A new container mutable.Seq will be created every given skip count.
When Skip > Max Size : dropping buffers

When Skip < Max Size : overlapping buffers

When Skip == Max Size : exact buffers
- C
the supplied mutable.Seq type
- maxSize
the max collected size
- skip
the number of items to skip before creating a new bucket
- bufferSupplier
the collection to use for each data segment
- returns
a microbatched Flux of possibly overlapped or gapped mutable.Seq
-
final
def
buffer(maxSize: Int, skip: Int): Flux[Seq[T]]
Collect incoming values into multiple Seq that will be pushed into the returned Flux when the given max size is reached or onComplete is received. A new container Seq will be created every given skip count.
When Skip > Max Size : dropping buffers

When Skip < Max Size : overlapping buffers

When Skip == Max Size : exact buffers
- maxSize
the max collected size
- skip
the number of items to skip before creating a new bucket
- returns
a microbatched Flux of possibly overlapped or gapped Seq
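A sketch of the dropping and overlapping cases, assuming the `Flux.just` factory:

```scala
import reactor.core.scala.publisher.Flux

// skip (3) > maxSize (2): one element is dropped between buckets.
Flux.just(1, 2, 3, 4, 5, 6)
  .buffer(2, 3)
  .subscribe(println) // Seq(1, 2) then Seq(4, 5); 3 and 6 are dropped

// skip (1) < maxSize (2): a bucket opens at every element, so buffers overlap.
Flux.just(1, 2, 3)
  .buffer(2, 1)
  .subscribe(println) // Seq(1, 2), Seq(2, 3), Seq(3)
```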
-
final
def
buffer[C <: ListBuffer[T]](maxSize: Int, bufferSupplier: () ⇒ C): Flux[Seq[T]]
Collect incoming values into multiple Seq buckets that will be pushed into the returned Flux when the given max size is reached or onComplete is received.
- C
the supplied Seq type
- maxSize
the maximum collected size
- bufferSupplier
the collection to use for each data segment
- returns
a microbatched Flux of Seq
-
final
def
buffer(maxSize: Int): Flux[Seq[T]]
Collect incoming values into multiple Seq buckets that will be pushed into the returned Flux when the given max size is reached or onComplete is received.
-
final
def
buffer(): Flux[Seq[T]]
Collect incoming values into a Seq that will be pushed into the returned Flux on complete only.
-
final
def
bufferTimeout[C <: ListBuffer[T]](maxSize: Int, timespan: Duration, bufferSupplier: () ⇒ C): Flux[Seq[T]]
Collect incoming values into a Seq that will be pushed into the returned Flux every timespan OR maxSize items.
- C
the supplied Seq type
- maxSize
the max collected size
- timespan
the timeout to use to release a buffered list
- bufferSupplier
the collection to use for each data segment
- returns
a microbatched Flux of Seq delimited by given size or a given period timeout
-
final
def
bufferTimeout(maxSize: Int, timespan: Duration): Flux[Seq[T]]
Collect incoming values into a Seq that will be pushed into the returned Flux every timespan OR maxSize items.
-
final
def
bufferUntil(predicate: (T) ⇒ Boolean, cutBefore: Boolean): Flux[Seq[T]]
Collect incoming values into multiple Seq that will be pushed into the returned Flux each time the given predicate returns true. Whether the element that triggers the predicate to return true (and thus closes a buffer) is included in the closed or the newly opened buffer depends on the cutBefore parameter: set it to true to include the boundary element in the newly opened buffer, false to include it in the closed buffer (as in Flux.bufferUntil).
On completion, if the latest buffer is non-empty and has not been closed it is emitted. However, such a "partial" buffer isn't emitted in case of onError termination.
- predicate
a predicate that triggers the next buffer when it becomes true.
- cutBefore
set to true to include the triggering element in the new buffer rather than the old.
- returns
a microbatched Flux of Seq
-
final
def
bufferUntil(predicate: (T) ⇒ Boolean): Flux[Seq[T]]
Collect incoming values into multiple Seq that will be pushed into the returned Flux each time the given predicate returns true. Note that the element that triggers the predicate to return true (and thus closes a buffer) is included as last element in the emitted buffer.

On completion, if the latest buffer is non-empty and has not been closed it is emitted. However, such a "partial" buffer isn't emitted in case of onError termination.
- predicate
a predicate that triggers the next buffer when it becomes true.
- returns
a microbatched Flux of Seq
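A sketch of both cutting modes, assuming the `Flux.just` factory:

```scala
import reactor.core.scala.publisher.Flux

// Default: the triggering element (a multiple of 3) closes
// the buffer and is included in it as the last element.
Flux.just(1, 2, 3, 4, 5, 6)
  .bufferUntil(i => i % 3 == 0)
  .subscribe(println) // Seq(1, 2, 3) then Seq(4, 5, 6)

// cutBefore = true: the triggering element opens the next buffer instead.
Flux.just(1, 2, 3, 4, 5, 6)
  .bufferUntil(i => i % 3 == 0, cutBefore = true)
  .subscribe(println) // Seq(1, 2), Seq(3, 4, 5), Seq(6)
```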
-
final
def
bufferWhen[U, V, C <: ListBuffer[T]](bucketOpening: Publisher[U], closeSelector: (U) ⇒ Publisher[V], bufferSupplier: () ⇒ C): Flux[Seq[T]]
Collect incoming values into multiple Seq delimited by the given Publisher signals. Each Seq bucket will last until the mapped Publisher receiving the boundary signal emits, thus releasing the bucket to the returned Flux.
When Open signal is strictly not overlapping Close signal : dropping buffers

When Open signal is strictly more frequent than Close signal : overlapping buffers

When Open signal is exactly coordinated with Close signal : exact buffers
- U
the element type of the bucket-opening sequence
- V
the element type of the bucket-closing sequence
- C
the supplied Seq type
- bucketOpening
a Publisher to subscribe to for creating new receiving bucket signals.
- closeSelector
a Publisher factory provided the opening signal and returning a Publisher to subscribe to for emitting relative bucket.
- bufferSupplier
the collection to use for each data segment
- returns
a microbatched Flux of Seq delimited by an opening Publisher and a relative closing Publisher
-
final
def
bufferWhen[U, V](bucketOpening: Publisher[U], closeSelector: (U) ⇒ Publisher[V]): Flux[Seq[T]]
Collect incoming values into multiple Seq delimited by the given Publisher signals. Each Seq bucket will last until the mapped Publisher receiving the boundary signal emits, thus releasing the bucket to the returned Flux.
When Open signal is strictly not overlapping Close signal : dropping buffers

When Open signal is strictly more frequent than Close signal : overlapping buffers

When Open signal is exactly coordinated with Close signal : exact buffers
- U
the element type of the bucket-opening sequence
- V
the element type of the bucket-closing sequence
- bucketOpening
a Publisher to subscribe to for creating new receiving bucket signals.
- closeSelector
a Publisher factory provided the opening signal and returning a Publisher to subscribe to for emitting relative bucket.
- returns
a microbatched Flux of Seq delimited by an opening Publisher and a relative closing Publisher
-
final
def
bufferWhile(predicate: (T) ⇒ Boolean): Flux[Seq[T]]
Collect incoming values into multiple Seq that will be pushed into the returned Flux. Each buffer continues aggregating values while the given predicate returns true, and a new buffer is created as soon as the predicate returns false. Note that the element that triggers the predicate to return false (and thus closes a buffer) is NOT included in any emitted buffer.

On completion, if the latest buffer is non-empty and has not been closed it is emitted. However, such a "partial" buffer isn't emitted in case of onError termination.
- predicate
a predicate that triggers the next buffer when it becomes false.
- returns
a microbatched Flux of Seq
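A sketch showing that the closing element is dropped, assuming the `Flux.just` factory:

```scala
import reactor.core.scala.publisher.Flux

// Buffers aggregate while values are positive; the negative
// trigger closes its buffer and appears in no buffer at all.
Flux.just(1, 2, -1, 3, 4)
  .bufferWhile(i => i > 0)
  .subscribe(println) // Seq(1, 2) then Seq(3, 4)
```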
-
final
def
cache(history: Int, ttl: Duration): Flux[T]
Turn this Flux into a hot source and cache last emitted signals for further Subscribers.
-
final
def
cache(ttl: Duration): Flux[T]
Turn this Flux into a hot source and cache last emitted signals for further Subscribers.
-
final
def
cache(history: Int): Flux[T]
Turn this Flux into a hot source and cache last emitted signals for further Subscribers.
-
final
def
cache(): Flux[T]
Turn this Flux into a hot source and cache last emitted signals for further Subscribers.
-
final
def
cancelOn(scheduler: Scheduler): Flux[T]
Prepare this Flux so that subscribers will cancel from it on a specified Scheduler.
-
final
def
cast[E](clazz: Class[E]): Flux[E]
Cast the current Flux produced type into a target produced type.
-
final
def
checkpoint(description: String): Flux[T]
Activate assembly tracing for this particular Flux and give it a description that will be reflected in the assembly traceback in case of an error upstream of the checkpoint.
It should be placed towards the end of the reactive chain, as errors triggered downstream of it cannot be observed and augmented with assembly trace.
The description could for example be a meaningful name for the assembled flux or a wider correlation ID.
- description
a description to include in the assembly traceback.
- returns
the assembly tracing Flux.
-
final
def
checkpoint(): Flux[T]
Activate assembly tracing for this particular Flux, in case of an error upstream of the checkpoint.
-
def
clone(): AnyRef
- Attributes
- protected[java.lang]
- Definition Classes
- AnyRef
- Annotations
- @native() @throws( ... )
-
final
def
collect[E](containerSupplier: () ⇒ E, collector: (E, T) ⇒ Unit): Mono[E]
Collect the Flux sequence with the given collector and supplied container on subscribe. The collected result will be emitted when this sequence completes.
-
final
def
collectMap[K, V](keyExtractor: (T) ⇒ K, valueExtractor: (T) ⇒ V, mapSupplier: () ⇒ Map[K, V]): Mono[Map[K, V]]
Convert all this Flux sequence into a supplied map where the key is extracted by the given function and the value will be the most recent extracted item for this key.
- K
the key extracted from each value of this Flux instance
- V
the value extracted from each value of this Flux instance
- keyExtractor
a Function1 to route items into a keyed Traversable
- valueExtractor
a Function1 to select the data to store from each item
- mapSupplier
a mutable.Map factory called for each Subscriber
- returns
-
final
def
collectMap[K, V](keyExtractor: (T) ⇒ K, valueExtractor: (T) ⇒ V): Mono[Map[K, V]]
Convert all this Flux sequence into a hashed map where the key is extracted by the given function and the value will be the most recent extracted item for this key.
- K
the key extracted from each value of this Flux instance
- V
the value extracted from each value of this Flux instance
- keyExtractor
a Function1 to route items into a keyed Traversable
- valueExtractor
a Function1 to select the data to store from each item
- returns
-
final
def
collectMap[K](keyExtractor: (T) ⇒ K): Mono[Map[K, T]]
Convert all this Flux sequence into a hashed map where the key is extracted by the given Function1 and the value will be the most recent emitted item for this key.
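A sketch of the last-value-wins semantics, assuming the `Flux.just` factory:

```scala
import reactor.core.scala.publisher.Flux

// Key by parity: for each key, the most recent value overwrites earlier ones.
Flux.just(1, 2, 3, 4)
  .collectMap(i => i % 2)
  .subscribe(m => println(m)) // Map(1 -> 3, 0 -> 4)
```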
-
final
def
collectMultimap[K, V](keyExtractor: (T) ⇒ K, valueExtractor: (T) ⇒ V, mapSupplier: () ⇒ Map[K, Collection[V]]): Mono[Map[K, Traversable[V]]]
Convert this Flux sequence into a supplied map where the key is extracted by the given function and the value will be all the extracted items for this key.
- K
the key extracted from each value of this Flux instance
- V
the value extracted from each value of this Flux instance
- keyExtractor
a Function1 to route items into a keyed Traversable
- valueExtractor
a Function1 to select the data to store from each item
- mapSupplier
a Map factory called for each Subscriber
- returns
-
final
def
collectMultimap[K, V](keyExtractor: (T) ⇒ K, valueExtractor: (T) ⇒ V): Mono[Map[K, Traversable[V]]]
Convert this Flux sequence into a hashed map where the key is extracted by the given function and the value will be all the extracted items for this key.
-
final
def
collectMultimap[K](keyExtractor: (T) ⇒ K): Mono[Map[K, Traversable[T]]]
Convert this Flux sequence into a hashed map where the key is extracted by the given function and the value will be all the emitted items for this key.
-
final
def
collectSeq(): Mono[Seq[T]]
Accumulate this Flux sequence in a Seq that is emitted to the returned Mono on onComplete.
-
final
def
collectSortedSeq(ordering: Ordering[T]): Mono[Seq[T]]
Accumulate this Flux sequence in a Seq, sorted using the given ordering, that is emitted to the returned Mono on onComplete.
-
final
def
collectSortedSeq(): Mono[Seq[T]]
Accumulate and sort this Flux sequence in a Seq that is emitted to the returned Mono on onComplete.
-
final
def
compose[V](transformer: (Flux[T]) ⇒ Publisher[V]): Flux[V]
Defer the transformation of this Flux in order to generate a target Flux for each new Subscriber.
flux.compose(Mono::from).subscribe()
- V
the item type in the returned Publisher
- transformer
the Function1 to map this Flux into a target Publisher instance for each new subscriber
- returns
a new Flux
- See also
Flux.transform for immediate transformation of Flux
Flux.as for a loose conversion to an arbitrary type
-
final
def
concatMap[V](mapper: (T) ⇒ Publisher[_ <: V], prefetch: Int): Flux[V]
Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave). Errors will immediately short circuit current concat backlog.
- V
the produced concatenated type
- mapper
the function to transform this sequence of T into concatenated sequences of V
- prefetch
the inner source produced demand
- returns
a concatenated Flux
-
final
def
concatMap[V](mapper: (T) ⇒ Publisher[_ <: V]): Flux[V]
Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave). Errors will immediately short circuit current concat backlog.
- V
the produced concatenated type
- mapper
the function to transform this sequence of T into concatenated sequences of V
- returns
a concatenated Flux
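A sketch of the ordering guarantee, assuming the `Flux.just` factory:

```scala
import reactor.core.scala.publisher.Flux

// Inner sequences are concatenated in source order, never interleaved,
// in contrast to flatMap which may merge them.
Flux.just(1, 2, 3)
  .concatMap(i => Flux.just(i * 10, i * 10 + 1))
  .subscribe(println) // 10, 11, 20, 21, 30, 31
```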
-
final
def
concatMapDelayError[V](mapper: (T) ⇒ Publisher[_ <: V], delayUntilEnd: Boolean, prefetch: Int): Flux[V]
Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave).
Errors will be delayed after the current concat backlog if delayUntilEnd is false or after all sources if delayUntilEnd is true.
- V
the produced concatenated type
- mapper
the function to transform this sequence of T into concatenated sequences of V
- delayUntilEnd
delay error until all sources have been consumed instead of after the current source
- prefetch
the inner source produced demand
- returns
a concatenated Flux
-
final
def
concatMapDelayError[V](mapper: (T) ⇒ Publisher[_ <: V], prefetch: Int): Flux[V]
Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave).
Errors will be delayed after all concated sources terminate.
- V
the produced concatenated type
- mapper
the function to transform this sequence of T into concatenated sequences of V
- prefetch
the inner source produced demand
- returns
a concatenated Flux
-
final
def
concatMapDelayError[V](mapper: (T) ⇒ Publisher[_ <: V]): Flux[V]
Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave).
Errors will be delayed after the current concat backlog.
- V
the produced concatenated type
- mapper
the function to transform this sequence of T into concatenated sequences of V
- returns
a concatenated Flux
-
final
def
concatMapIterable[R](mapper: (T) ⇒ Iterable[_ <: R], prefetch: Int): Flux[R]
Bind Iterable sequences given this input sequence like Flux.flatMapIterable, but preserve ordering and concatenate emissions instead of merging (no interleave).
Errors will be delayed after the current concat backlog.
- R
the produced concatenated type
- mapper
the function to transform this sequence of T into concatenated sequences of R
- prefetch
the inner source produced demand
- returns
a concatenated Flux
-
final
def
concatMapIterable[R](mapper: (T) ⇒ Iterable[_ <: R]): Flux[R]
Bind Iterable sequences given this input sequence like Flux.flatMapIterable, but preserve ordering and concatenate emissions instead of merging (no interleave).
Errors will be delayed after the current concat backlog.
- R
the produced concatenated type
- mapper
the function to transform this sequence of T into concatenated sequences of R
- returns
a concatenated Flux
-
final
def
concatWith(other: Publisher[_ <: T]): Flux[T]
Concatenate emissions of this Flux with the provided Publisher (no interleave).
-
def
count(): Mono[Long]
Counts the number of values in this Flux.
-
final
def
defaultIfEmpty(defaultV: T): Flux[T]
Provide a default unique value if this sequence is completed without any data.

- defaultV
the alternate value if this sequence is empty
- returns
a new Flux
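A sketch of the empty-source fallback, assuming the `Flux.empty` factory:

```scala
import reactor.core.scala.publisher.Flux

// An empty source completes with the single fallback value
// instead of completing without data.
Flux.empty[String]
  .defaultIfEmpty("fallback")
  .subscribe(println) // prints "fallback"
```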
-
final
def
delayElements(delay: Duration, timer: Scheduler): Flux[T]
Delay each of this Flux elements (Subscriber.onNext signals) by a given duration, on a given Scheduler.
- delay
duration to delay each Subscriber.onNext signal
- timer
the Scheduler to use for delaying each signal
- returns
a delayed Flux
- See also
#delaySubscription(Duration) delaySubscription to introduce a delay at the beginning of the sequence only
-
final
def
delayElements(delay: Duration): Flux[T]
Delay each of this Flux elements (Subscriber.onNext signals) by a given duration.
-
final
def
delaySequence(delay: Duration, timer: Scheduler): Flux[T]
Shift this Flux forward in time by a given Duration.
Shift this Flux forward in time by a given Duration. Unlike with Flux.delayElements(Duration), elements are shifted forward in time as they are emitted, always resulting in the delay between two elements being the same as in the source (only the first element is visibly delayed from the previous event, that is the subscription). Signals are delayed and continue on a user-specified Scheduler, but empty sequences or immediate error signals are not delayed.
With this operator, a source emitting at 10Hz with a delaySequence Duration of 1s will still emit at 10Hz, with an initial "hiccup" of 1s. On the other hand, Flux.delayElements(Duration) would end up emitting at 1Hz.
This is closer to Flux.delaySubscription(Duration), except the source is subscribed to immediately.
- delay
Duration to shift the sequence by
- timer
a time-capable Scheduler instance to delay signals on
- returns
a shifted Flux emitting at the same frequency as the source
-
final
def
delaySequence(delay: Duration): Flux[T]
Shift this Flux forward in time by a given Duration.
Shift this Flux forward in time by a given Duration. Unlike with Flux.delayElements(Duration), elements are shifted forward in time as they are emitted, always resulting in the delay between two elements being the same as in the source (only the first element is visibly delayed from the previous event, that is the subscription). Signals are delayed and continue on the parallel Scheduler, but empty sequences or immediate error signals are not delayed.
With this operator, a source emitting at 10Hz with a delaySequence Duration of 1s will still emit at 10Hz, with an initial "hiccup" of 1s. On the other hand, Flux.delayElements(Duration) would end up emitting at 1Hz.
This is closer to Flux.delaySubscription(Duration), except the source is subscribed to immediately.
- delay
Duration to shift the sequence by
- returns
a shifted Flux emitting at the same frequency as the source
-
final
def
delaySubscription[U](subscriptionDelay: Publisher[U]): Flux[T]
Delay the subscription to the main source until another Publisher signals a value or completes.
- U
the other source type
- subscriptionDelay
a Publisher to signal by next or complete this Flux.subscribe
- returns
a delayed Flux
-
final
def
delaySubscription(delay: Duration, timer: Scheduler): Flux[T]
Delay the subscription to this Flux source until the given period elapses.
-
final
def
delaySubscription(delay: Duration): Flux[T]
Delay the subscription to this Flux source until the given period elapses.
-
final
def
dematerialize[X](): Flux[X]
A "phantom-operator" that works only if this Flux emits onNext, onError or onComplete reactor.core.publisher.Signal instances. The relative Subscriber callback will be invoked: an error reactor.core.publisher.Signal will trigger onError and a complete reactor.core.publisher.Signal will trigger onComplete.
- X
the dematerialized type
- returns
a dematerialized Flux
-
final
def
distinct[V](keySelector: (T) ⇒ V): Flux[T]
For each Subscriber, tracks this Flux values that have been seen and filters out duplicates given the extracted key.
-
final
def
distinct(): Flux[T]
For each Subscriber, tracks this Flux values that have been seen and filters out duplicates.
-
final
def
distinctUntilChanged[V](keySelector: (T) ⇒ V, keyComparator: (V, V) ⇒ Boolean): Flux[T]
Filter out subsequent repetitions of an element (that is, if they arrive right after one another), as compared by a key extracted through the user provided Function1 and then comparing keys with the supplied Function2.
- V
the type of the key extracted from each value in this sequence
- keySelector
function to compute comparison key for each element
- keyComparator
predicate used to compare keys.
- returns
a filtering Flux with only one occurrence in a row of each element of the same key for which the predicate returns true (yet element keys can repeat in the overall sequence)
-
final
def
distinctUntilChanged[V](keySelector: (T) ⇒ V): Flux[T]
Filters out subsequent and repeated elements provided a matching extracted key.
- V
the type of the key extracted from each value in this sequence
- keySelector
function to compute comparison key for each element
- returns
a filtering Flux with conflated repeated elements given a comparison key
-
final
def
distinctUntilChanged(): Flux[T]
Filters out subsequent and repeated elements.
- returns
a filtering Flux with conflated repeated elements
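A sketch showing that only adjacent repeats are conflated, assuming the `Flux.just` factory:

```scala
import reactor.core.scala.publisher.Flux

// Consecutive duplicates are dropped, but a value may reappear
// later in the sequence (unlike distinct()).
Flux.just(1, 1, 2, 2, 1)
  .distinctUntilChanged()
  .subscribe(println) // 1, 2, 1
```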
-
final
def
doAfterTerminate(afterTerminate: () ⇒ Unit): Flux[T]
Triggered after the Flux terminates, either by completing downstream successfully or with an error.
- final def doFinally(onFinally: (SignalType) ⇒ Unit): Flux[T]
-
final
def
doOnCancel(onCancel: () ⇒ Unit): Flux[T]
Triggered when the Flux is cancelled.
-
final
def
doOnComplete(onComplete: () ⇒ Unit): Flux[T]
Triggered when the Flux completes successfully.
-
final
def
doOnEach(signalConsumer: (Signal[T]) ⇒ Unit): Flux[T]
Triggers side-effects when the Flux emits an item, fails with an error or completes successfully. All these events are represented as a Signal that is passed to the side-effect callback. Note that this is an advanced operator, typically used for monitoring of a Flux.
- signalConsumer
the mandatory callback to call on Subscriber.onNext, Subscriber.onError and Subscriber.onComplete
- returns
an observed Flux
-
final
def
doOnError(predicate: (Throwable) ⇒ Boolean, onError: (Throwable) ⇒ Unit): Flux[T]
Triggered when the Flux completes with an error matching the given exception.
-
final
def
doOnError[E <: Throwable](exceptionType: Class[E], onError: (E) ⇒ Unit): Flux[T]
Triggered when the Flux completes with an error matching the given exception type.
-
final
def
doOnError(onError: (Throwable) ⇒ Unit): Flux[T]
Triggered when the Flux completes with an error.
-
final
def
doOnNext(onNext: (T) ⇒ Unit): Flux[T]
Triggered when the Flux emits an item.
-
final
def
doOnRequest(consumer: (Long) ⇒ Unit): Flux[T]
Attach a Long consumer to this Flux that will observe any request to this Flux.
-
final
def
doOnSubscribe(onSubscribe: (Subscription) ⇒ Unit): Flux[T]
Triggered when the Flux is subscribed.
-
final
def
doOnTerminate(onTerminate: () ⇒ Unit): Flux[T]
Triggered when the Flux terminates, either by completing successfully or with an error.
-
final
def
elapsed(scheduler: Scheduler): Flux[(Long, T)]
Map this Flux sequence into a Tuple2 of T1 Long (timemillis) and T2 T (associated data). The timemillis corresponds to the elapsed time between the subscribe and the first next signal, OR between two next signals.
- scheduler
a Scheduler instance to read time from
- returns
a transforming Flux that emits tuples of time elapsed in milliseconds and matching data
-
final
def
elapsed(): Flux[(Long, T)]
Map this Flux sequence into a Tuple2 of T1 Long (timemillis) and T2 T (associated data). The timemillis corresponds to the elapsed time between the subscribe and the first next signal, OR between two next signals.
- returns
a transforming Flux that emits tuples of time elapsed in milliseconds and matching data
-
final
def
elementAt(index: Int, defaultValue: T): Mono[T]
Emit only the element at the given index position, or signal the given default value if the sequence is shorter.
- index
index of an item
- defaultValue
supply a default value if not found
- returns
a Mono of the item at a specified index or a default value
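The contract can be sketched over a plain Seq (a semantics sketch, not the reactive implementation; `elementAtOrDefault` is a hypothetical helper name):

```scala
// Sketch of the elementAt(index, defaultValue) contract: the element at the
// 0-based index, or the default when the sequence is shorter.
def elementAtOrDefault[T](xs: Seq[T], index: Int, default: T): T =
  xs.lift(index).getOrElse(default)

val letters = Seq("a", "b", "c")
val second  = elementAtOrDefault(letters, 1, "none") // "b"
val missing = elementAtOrDefault(letters, 9, "none") // "none": sequence is shorter
```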
-
final
def
elementAt(index: Int): Mono[T]
Emit only the element at the given index position, or signal an IndexOutOfBoundsException if the sequence is shorter.
- index
index of an item
- returns
a Mono of the item at a specified index
-
final
def
eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def
equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
expand(expander: (T) ⇒ Publisher[_ <: T]): Flux[T]
Recursively expand elements into a graph and emit all the resulting elements using a breadth-first traversal strategy.
That is: emit the values from this Flux first, then expand each at a first level of recursion and emit all of the resulting values, then expand all of these at a second level, and so on.
For example, given the hierarchical structure
A - AA - aa1
B - BB - bb1
expanding Flux.just(A, B) emits A B AA BB aa1 bb1
- expander
the Function1 applied at each level of recursion to expand values into a Publisher, producing a graph.
- returns
a breadth-first expanded Flux
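The level-by-level order can be sketched with plain collections; `children` below is a hypothetical adjacency map standing in for the expander function:

```scala
// Breadth-first expansion sketch: emit the roots first, then all their children,
// then all grandchildren, mirroring expand's level-by-level traversal.
val children: Map[String, List[String]] = Map(
  "A" -> List("AA"), "AA" -> List("aa1"),
  "B" -> List("BB"), "BB" -> List("bb1")
).withDefaultValue(Nil)

@annotation.tailrec
def expandBreadthFirst(level: List[String], acc: List[String] = Nil): List[String] =
  if (level.isEmpty) acc
  else expandBreadthFirst(level.flatMap(children), acc ++ level)

val order = expandBreadthFirst(List("A", "B"))
// order == List("A", "B", "AA", "BB", "aa1", "bb1")
```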
-
final
def
expand(expander: (T) ⇒ Publisher[_ <: T], capacityHint: Int): Flux[T]
Recursively expand elements into a graph and emit all the resulting elements using a breadth-first traversal strategy.
That is: emit the values from this Flux first, then expand each at a first level of recursion and emit all of the resulting values, then expand all of these at a second level, and so on.
For example, given the hierarchical structure
A - AA - aa1
B - BB - bb1
expanding Flux.just(A, B) emits A B AA BB aa1 bb1
- expander
the Function1 applied at each level of recursion to expand values into a Publisher, producing a graph.
- capacityHint
a capacity hint to prepare the inner queues to accommodate n elements per level of recursion.
- returns
a breadth-first expanded Flux
-
final
def
expandDeep(expander: (T) ⇒ Publisher[_ <: T]): Flux[T]
Recursively expand elements into a graph and emit all the resulting elements, in a depth-first traversal order.
That is: emit one value from this Flux, expand it and emit the first value at this first level of recursion, and so on... When no more recursion is possible, backtrack to the previous level and re-apply the strategy.
For example, given the hierarchical structure
A - AA - aa1
B - BB - bb1
expanding Flux.just(A, B) emits A AA aa1 B BB bb1
- expander
the Function1 applied at each level of recursion to expand values into a Publisher, producing a graph.
- returns
a Flux expanded depth-first
-
final
def
expandDeep(expander: (T) ⇒ Publisher[_ <: T], capacityHint: Int): Flux[T]
Recursively expand elements into a graph and emit all the resulting elements, in a depth-first traversal order.
That is: emit one value from this Flux, expand it and emit the first value at this first level of recursion, and so on... When no more recursion is possible, backtrack to the previous level and re-apply the strategy.
For example, given the hierarchical structure
A - AA - aa1
B - BB - bb1
expanding Flux.just(A, B) emits A AA aa1 B BB bb1
- expander
the Function1 applied at each level of recursion to expand values into a Publisher, producing a graph.
- capacityHint
a capacity hint to prepare the inner queues to accommodate n elements per level of recursion.
- returns
a Flux expanded depth-first
-
final
def
filter(p: (T) ⇒ Boolean): Flux[T]
Evaluate each accepted value against the given predicate T => Boolean. If the predicate test succeeds, the value is passed into the new Flux. If the predicate test fails, the value is ignored and a request of 1 is emitted.
- p
the Function1 predicate to test values against
- returns
a new Flux containing only values that pass the predicate test
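The filtering semantics mirror plain-collection filter, sketched here without the reactive demand mechanics:

```scala
// filter semantics over a plain sequence: values failing the predicate are dropped.
// (In the reactive version each drop additionally triggers a request of 1 upstream,
// so demand is preserved.)
val evens = List(1, 2, 3, 4, 5, 6).filter(_ % 2 == 0)
// evens == List(2, 4, 6)
```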
-
final
def
filterWhen(asyncPredicate: Function1[T, _ <: Publisher[Boolean] with MapablePublisher[Boolean]], bufferSize: Int): Flux[T]
Test each value emitted by this Flux asynchronously using a generated Publisher[Boolean] test. A value is replayed if the first item emitted by its corresponding test is true. It is dropped if its test is either empty or its first emitted value is false.
Note that only the first value of the test publisher is considered, and unless it is a Mono, the test will be cancelled after receiving that first value. Test publishers are generated and subscribed to in sequence.
- asyncPredicate
the function generating a Publisher of Boolean for each value, to filter the Flux with
- bufferSize
the maximum expected number of values to hold pending a result of their respective asynchronous predicates, rounded to the next power of two. This is capped depending on the size of the heap and the JVM limits, so be careful with large values (although eg. 65536 should still be fine). Also serves as the initial request size for the source.
- returns
a filtered Flux
-
final
def
filterWhen(asyncPredicate: Function1[T, _ <: Publisher[Boolean] with MapablePublisher[Boolean]]): Flux[T]
Test each value emitted by this Flux asynchronously using a generated Publisher[Boolean] test. A value is replayed if the first item emitted by its corresponding test is true. It is dropped if its test is either empty or its first emitted value is false.
Note that only the first value of the test publisher is considered, and unless it is a Mono, the test will be cancelled after receiving that first value. Test publishers are generated and subscribed to in sequence.
- asyncPredicate
the function generating a Publisher of Boolean for each value, to filter the Flux with
- returns
a filtered Flux
-
def
finalize(): Unit
- Attributes
- protected[java.lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
-
final
def
flatMap[R](mapperOnNext: (T) ⇒ Publisher[_ <: R], mapperOnError: (Throwable) ⇒ Publisher[_ <: R], mapperOnComplete: () ⇒ Publisher[_ <: R]): Flux[R]
Transform the signals emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave. OnError will be transformed into a completion signal after its mapping callback has been applied.

- R
the output Publisher type target
- mapperOnNext
the Function1 to call on next data and returning a sequence to merge
- mapperOnError
the Function to call on error signal and returning a sequence to merge
- mapperOnComplete
the Function1 to call on complete signal and returning a sequence to merge
- returns
a new Flux
-
final
def
flatMap[V](mapper: (T) ⇒ Publisher[_ <: V], concurrency: Int, prefetch: Int): Flux[V]
Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave. The concurrency argument controls how many merged Publishers can be active in parallel. The prefetch argument gives an arbitrary prefetch size to the merged Publishers.
-
final
def
flatMap[V](mapper: (T) ⇒ Publisher[_ <: V], concurrency: Int): Flux[V]
Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave.
-
final
def
flatMap[R](mapper: (T) ⇒ Publisher[_ <: R]): Flux[R]
Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave.
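The map-then-flatten shape can be sketched with plain collections; note the plain version is deterministic, whereas the reactive operator subscribes to inner Publishers eagerly so their emissions may interleave:

```scala
// flatMap sketch: each value maps to an inner sequence whose elements are
// flattened into one output sequence.
val expanded = List(1, 2, 3).flatMap(n => List(n, n * 10))
// expanded == List(1, 10, 2, 20, 3, 30)
```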
-
final
def
flatMapDelayError[V](mapper: (T) ⇒ Publisher[_ <: V], concurrency: Int, prefetch: Int): Flux[V]
Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave. The concurrency argument controls how many merged Publishers can be active in parallel. The prefetch argument gives an arbitrary prefetch size to the merged Publishers. This variant will delay any error until after the rest of the flatMap backlog has been processed.
-
final
def
flatMapIterable[R](mapper: (T) ⇒ Iterable[_ <: R], prefetch: Int): Flux[R]
Transform the items emitted by this Flux into Iterables, then flatten the emissions from those by merging them into a single Flux. The prefetch argument gives an arbitrary prefetch size to the merged Iterables.
- R
the merged output sequence type
- mapper
the Function1 to transform input sequence into N sequences Iterable
- prefetch
the maximum in-flight elements from each inner Iterable sequence
- returns
a merged Flux
-
final
def
flatMapIterable[R](mapper: (T) ⇒ Iterable[_ <: R]): Flux[R]
Transform the items emitted by this Flux into Iterables, then flatten the elements from those by merging them into a single Flux.
- R
the merged output sequence type
- mapper
the Function1 to transform input sequence into N sequences Iterable
- returns
a merged Flux
-
final
def
flatMapSequential[R](mapper: (T) ⇒ Publisher[_ <: R], maxConcurrency: Int, prefetch: Int): Flux[R]
Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order. Unlike concatMap, transformed inner Publishers are subscribed to eagerly. Unlike flatMap, their emitted elements are merged respecting the order of the original sequence. The concurrency argument controls how many merged Publishers can be active in parallel. The prefetch argument gives an arbitrary prefetch size to the merged Publishers.
-
final
def
flatMapSequential[R](mapper: (T) ⇒ Publisher[_ <: R], maxConcurrency: Int): Flux[R]
Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order. Unlike concatMap, transformed inner Publishers are subscribed to eagerly. Unlike flatMap, their emitted elements are merged respecting the order of the original sequence. The concurrency argument controls how many merged Publishers can be active in parallel.
-
final
def
flatMapSequential[R](mapper: (T) ⇒ Publisher[_ <: R]): Flux[R]
Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order. Unlike concatMap, transformed inner Publishers are subscribed to eagerly. Unlike flatMap, their emitted elements are merged respecting the order of the original sequence.
- R
the merged output sequence type
- mapper
the Function1 to transform input sequence into N sequences Publisher
- returns
a merged Flux
-
final
def
flatMapSequentialDelayError[R](mapper: (T) ⇒ Publisher[_ <: R], maxConcurrency: Int, prefetch: Int): Flux[R]
Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order. Unlike concatMap, transformed inner Publishers are subscribed to eagerly. Unlike flatMap, their emitted elements are merged respecting the order of the original sequence. The concurrency argument controls how many merged Publishers can be active in parallel. The prefetch argument gives an arbitrary prefetch size to the merged Publishers. This variant will delay any error until after the rest of the flatMap backlog has been processed.
- R
the merged output sequence type
- mapper
the Function1 to transform input sequence into N sequences Publisher
- maxConcurrency
the maximum in-flight elements from this Flux sequence
- prefetch
the maximum in-flight elements from each inner Publisher sequence
- returns
a merged Flux, subscribing early but keeping the original ordering
-
final
def
flatten[S](implicit ev: <:<[T, Flux[S]]): Flux[S]
Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave). Errors will immediately short circuit the current concat backlog.
Alias for concatMap.
- returns
a concatenated Flux
- Definition Classes
- FluxLike
-
final
def
getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
getPrefetch: Long
The prefetch configuration of the Flux
-
final
def
groupBy[K, V](keyMapper: (T) ⇒ K, valueMapper: (T) ⇒ V, prefetch: Int): Flux[GroupedFlux[K, V]]
Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper. It will use the given value mapper to extract the element to route.
- K
the key type extracted from each value of this sequence
- V
the value type extracted from each value of this sequence
- keyMapper
the key mapping function that evaluates incoming data and returns a key.
- valueMapper
the value mapping function that evaluates which data to extract for re-routing.
- prefetch
the number of values to prefetch from the source
- returns
a Flux of GroupedFlux grouped sequences
-
final
def
groupBy[K, V](keyMapper: (T) ⇒ K, valueMapper: (T) ⇒ V): Flux[GroupedFlux[K, V]]
Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper. It will use the given value mapper to extract the element to route.
- K
the key type extracted from each value of this sequence
- V
the value type extracted from each value of this sequence
- keyMapper
the key mapping function that evaluates incoming data and returns a key.
- valueMapper
the value mapping function that evaluates which data to extract for re-routing.
- returns
a Flux of GroupedFlux grouped sequences
-
final
def
groupBy[K](keyMapper: (T) ⇒ K, prefetch: Int): Flux[GroupedFlux[K, T]]
Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper.
- K
the key type extracted from each value of this sequence
- keyMapper
the key mapping Function1 that evaluates incoming data and returns a key.
- prefetch
the number of values to prefetch from the source
- returns
a Flux of GroupedFlux grouped sequences
-
final
def
groupBy[K](keyMapper: (T) ⇒ K): Flux[GroupedFlux[K, T]]
Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper.
- K
the key type extracted from each value of this sequence
- keyMapper
the key mapping Function1 that evaluates incoming data and returns a key.
- returns
a Flux of GroupedFlux grouped sequences
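The routing can be sketched with the plain-collections groupBy; here each GroupedFlux is approximated by a List per key (a semantics sketch, not the reactive implementation):

```scala
// groupBy sketch: the key mapper routes each value to the group for its key.
val byParity: Map[String, List[Int]] =
  List(1, 2, 3, 4, 5).groupBy(n => if (n % 2 == 0) "even" else "odd")
// byParity("even") == List(2, 4); byParity("odd") == List(1, 3, 5)
```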
-
final
def
groupJoin[TRight, TLeftEnd, TRightEnd, R](other: Publisher[_ <: TRight], leftEnd: (T) ⇒ Publisher[TLeftEnd], rightEnd: (TRight) ⇒ Publisher[TRightEnd], resultSelector: (T, Flux[TRight]) ⇒ R): Flux[R]
Returns a Flux that correlates two Publishers when they overlap in time and groups the results.
There are no guarantees in what order the items get combined when multiple items from one or both source Publishers overlap.
Unlike Flux.join, items from the right Publisher will be streamed into the right resultSelector argument Flux.
- TRight
the type of the right Publisher
- TLeftEnd
this Flux timeout type
- TRightEnd
the right Publisher timeout type
- R
the combined result type
- other
the other Publisher to correlate items from the source Publisher with
- leftEnd
a function that returns a Publisher whose emissions indicate the duration of the values of the source Publisher
- rightEnd
a function that returns a Publisher whose emissions indicate the duration of the values of the right Publisher
- resultSelector
a function that takes an item emitted by each Publisher and returns the value to be emitted by the resulting Publisher
- returns
a joining Flux
-
final
def
handle[R](handler: (T, SynchronousSink[R]) ⇒ Unit): Flux[R]
Handle the items emitted by this Flux by calling a biconsumer with the output sink for each onNext.
-
final
def
hasElement(value: T): Mono[Boolean]
Emit a single boolean true if any of the values of this Flux sequence match the constant.
The implementation uses short-circuit logic and completes with true if the constant matches a value.
- value
constant compared to incoming signals
- returns
a new Mono with true if any value matches the constant and false otherwise
-
final
def
hasElements(): Mono[Boolean]
Emit a single boolean true if this Flux sequence has at least one element.
-
def
hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
final
def
hide(): Flux[T]
Hides the identities of this Flux and its Subscription as well.
-
final
def
ignoreElements(): Mono[T]
Ignores onNext signals (dropping them) and only reacts on termination.

- returns
a new completable Mono.
-
final
def
index[I](indexMapper: (Long, T) ⇒ I): Flux[I]
Keep information about the order in which source values were received by indexing them internally with a 0-based incrementing long, then combining this information with the source value into an I using the provided Function2, returning a Flux[I].
Typical usage would be to produce a scala.Tuple2 similar to Flux.index(), but 1-based instead of 0-based:
index((i, v) => (i + 1, v))
- indexMapper
the Function2 to use to combine elements and their index.
- returns
an indexed Flux with each source value combined with its computed index.
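The indexing semantics mirror zipWithIndex over plain collections, sketched here including the 1-based indexMapper variant:

```scala
// index sketch: pair each value with a 0-based Long index; the indexMapper
// variant combines index and value, e.g. to switch to 1-based numbering.
val source   = List("a", "b", "c")
val indexed  = source.zipWithIndex.map { case (v, i) => (i.toLong, v) }       // 0-based
val oneBased = source.zipWithIndex.map { case (v, i) => ((i + 1).toLong, v) } // 1-based
// indexed  == List((0L, "a"), (1L, "b"), (2L, "c"))
// oneBased == List((1L, "a"), (2L, "b"), (3L, "c"))
```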
-
final
def
index(): Flux[(Long, T)]
Keep information about the order in which source values were received by indexing them with a 0-based incrementing long, returning a Flux of (index, value) pairs.
-
def
inners(): Stream[_ <: Scannable]
- Definition Classes
- Scannable
-
final
def
isInstanceOf[T0]: Boolean
- Definition Classes
- Any
-
def
isScanAvailable: Boolean
- Definition Classes
- Scannable
- def jScannable: core.Scannable
-
final
def
join[TRight, TLeftEnd, TRightEnd, R](other: Publisher[_ <: TRight], leftEnd: (T) ⇒ Publisher[TLeftEnd], rightEnd: (TRight) ⇒ Publisher[TRightEnd], resultSelector: (T, TRight) ⇒ R): Flux[R]
Returns a Flux that correlates two Publishers when they overlap in time and groups the results.
There are no guarantees in what order the items get combined when multiple items from one or both source Publishers overlap.
- TRight
the type of the right Publisher
- TLeftEnd
this Flux timeout type
- TRightEnd
the right Publisher timeout type
- R
the combined result type
- other
the other Publisher to correlate items from the source Publisher with
- leftEnd
a function that returns a Publisher whose emissions indicate the duration of the values of the source Publisher
- rightEnd
a function that returns a Publisher whose emissions indicate the duration of the values of the right Publisher
- resultSelector
a function that takes an item emitted by each Publisher and returns the value to be emitted by the resulting Publisher
- returns
a joining Flux
-
final
def
last(defaultValue: T): Mono[T]
Signal the last element observed before the complete signal, or emit the defaultValue if empty. For a passive version use Flux.takeLast.
-
final
def
last(): Mono[T]
Signal the last element observed before the complete signal, or emit a NoSuchElementException error if the source was empty. For a passive version use Flux.takeLast.
- returns
a Mono with the last value in this sequence
-
final
def
limitRate(prefetchRate: Int): Flux[T]
Ensure that backpressure signals from downstream subscribers are capped at the provided prefetchRate when propagated upstream, effectively rate limiting the upstream Publisher.
Typically used for scenarios where consumer(s) request a large amount of data (eg. Long.MaxValue) but the data source behaves better or can be optimized with smaller requests (eg. database paging, etc...). All data is still processed.
Equivalent to flux.publishOn(Schedulers.immediate(), prefetchRate).subscribe()
- prefetchRate
the limit to apply to downstream's backpressure
- returns
a Flux limiting downstream's backpressure
- See also
-
final
def
log(category: String, level: Level, showOperatorLine: Boolean, options: SignalType*): Flux[T]
Observe Reactive Streams signals matching the passed filter options and use Logger support to handle trace implementation. Default will use the passed Level and java.util.logging. If SLF4J is available, it will be used instead.
Options allow fine grained filtering of the traced signal, for instance to only capture onNext and onError:
flux.log("category", Level.INFO, SignalType.ON_NEXT, SignalType.ON_ERROR)
- category
to be mapped into logger configuration (e.g. org.springframework.reactor). If category ends with "." like "reactor.", a generated operator suffix will complete it, e.g. "reactor.Flux.Map".
- level
the Level to enforce for this tracing Flux (only FINEST, FINE, INFO, WARNING and SEVERE are taken into account)
- showOperatorLine
capture the current stack to display operator class/line number.
- options
a vararg SignalType option to filter log messages
- returns
a new unaltered Flux
-
final
def
log(category: String, level: Level, options: SignalType*): Flux[T]
Observe Reactive Streams signals matching the passed filter options and use Logger support to handle trace implementation. Default will use the passed Level and java.util.logging. If SLF4J is available, it will be used instead.
Options allow fine grained filtering of the traced signal, for instance to only capture onNext and onError:
flux.log("category", Level.INFO, SignalType.ON_NEXT, SignalType.ON_ERROR)
- category
to be mapped into logger configuration (e.g. org.springframework.reactor). If category ends with "." like "reactor.", a generated operator suffix will complete it, e.g. "reactor.Flux.Map".
- level
the Level to enforce for this tracing Flux (only FINEST, FINE, INFO, WARNING and SEVERE are taken into account)
- options
a vararg SignalType option to filter log messages
- returns
a new unaltered Flux
-
final
def
log(category: String): Flux[T]
Observe all Reactive Streams signals and use Logger support to handle trace implementation. Default will use Level.INFO and java.util.logging. If SLF4J is available, it will be used instead.

- category
to be mapped into logger configuration (e.g. org.springframework .reactor). If category ends with "." like "reactor.", a generated operator suffix will complete, e.g. "reactor.Flux.Map".
- returns
a new unaltered Flux
-
final
def
log(): Flux[T]
Observe all Reactive Streams signals and use Logger support to handle trace implementation. Default will use Level.INFO and java.util.logging. If SLF4J is available, it will be used instead.

The default log category will be "reactor.*", a generated operator suffix will complete, e.g. "reactor.Flux.Map".
- returns
a new unaltered Flux
-
final
def
map[V](mapper: (T) ⇒ V): Flux[V]
Transform the items emitted by this Flux by applying a function to each item.

- V
the transformed type
- mapper
the transforming Function1
- returns
a transformed Flux
- Definition Classes
- Flux → MapablePublisher
-
final
def
materialize(): Flux[Signal[T]]
Transform the incoming onNext, onError and onComplete signals into Signal. Since the error is materialized as a Signal, the propagation will be stopped and onComplete will be emitted. The complete signal will first emit a Signal.complete and then effectively complete the flux.
- returns
a Flux of materialized Signal
-
final
def
mergeWith(other: Publisher[_ <: T]): Flux[T]
Merge data from this Flux and a Publisher into an interleaved merged sequence. Unlike concat, inner sources are subscribed to eagerly.

Note that merge is tailored to work with asynchronous sources or finite sources. When dealing with an infinite source that doesn't already publish on a dedicated Scheduler, you must isolate that source in its own Scheduler, as merge would otherwise attempt to drain it before subscribing to another source.
- other
the Publisher to merge with
- returns
a new Flux
-
final
def
name(name: String): Flux[T]
Give a name to this sequence, which can be retrieved using reactor.core.scala.Scannable.name() as long as this is the first reachable reactor.core.scala.Scannable.parents().
- name
a name for the sequence
- returns
the same sequence, but bearing a name
-
def
name: String
Check this Scannable and its Scannable.parents() for a name and return the first one that is reachable.
-
final
def
ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
final
def
next(): Mono[T]
Emit only the first item emitted by this Flux.
-
final
def
notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
final
def
notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
final
def
ofType[U](clazz: Class[U]): Flux[U]
Evaluate each accepted value against the given Class type. If the predicate test succeeds, the value is passed into the new Flux. If the predicate test fails, the value is ignored and a request of 1 is emitted.
- clazz
the Class type to test values against
- returns
a new Flux reduced to items converted to the matched type
-
final
def
onBackpressureBuffer(maxSize: Int, onBufferOverflow: (T) ⇒ Unit, bufferOverflowStrategy: BufferOverflowStrategy): Flux[T]
Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream, within a maxSize limit. Over that limit, the overflow strategy is applied (see BufferOverflowStrategy).
A Consumer is immediately invoked when there is an overflow, receiving the value that was discarded because of the overflow (which can be different from the latest element emitted by the source in case of a DROP_LATEST strategy).
Note that for the ERROR strategy, the overflow error will be delayed after the current backlog is consumed. The consumer is still invoked immediately.
- maxSize
maximum buffer backlog size before overflow callback is called
- onBufferOverflow
callback to invoke on overflow
- bufferOverflowStrategy
strategy to apply to overflowing elements
- returns
a buffering Flux
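The DROP_LATEST bookkeeping can be sketched on a plain bounded buffer. This is a semantic model only, not the operator's implementation; the `offer` helper is hypothetical:

```scala
// Model of a bounded buffer with DROP_LATEST: when full, the incoming
// element (not a buffered one) is discarded and handed to the overflow callback.
def offer[A](buffer: Vector[A], maxSize: Int, elem: A): (Vector[A], Option[A]) =
  if (buffer.size < maxSize) (buffer :+ elem, None) // capacity left: park the element
  else (buffer, Some(elem))                         // overflow: drop the incoming element

val (buf1, dropped1) = offer(Vector(1, 2), maxSize = 3, elem = 3)
val (buf2, dropped2) = offer(buf1, maxSize = 3, elem = 4)
// buf2 == Vector(1, 2, 3); dropped2 == Some(4): the callback receives 4,
// which differs from the latest buffered element (3), as the doc notes.
```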
-
final
def
onBackpressureBuffer(maxSize: Int, bufferOverflowStrategy: BufferOverflowStrategy): Flux[T]
Request an unbounded demand and push the returned Flux, or park the observed elements if not enough demand is requested downstream, within a maxSize limit.
Over that limit, the overflow strategy is applied (see BufferOverflowStrategy). Note that for the ERROR strategy, the overflow error will be delayed until the current backlog is consumed.
- maxSize
maximum buffer backlog size before overflow strategy is applied
- bufferOverflowStrategy
strategy to apply to overflowing elements
- returns
a buffering Flux
-
final
def
onBackpressureBuffer(maxSize: Int, onOverflow: (T) ⇒ Unit): Flux[T]
Request an unbounded demand and push the returned Flux, or park the observed elements if not enough demand is requested downstream.
Request an unbounded demand and push the returned Flux, or park the observed elements if not enough demand is requested downstream. The overflow error will be delayed until the current backlog is consumed; however, the onOverflow callback will be invoked immediately.
- maxSize
maximum buffer backlog size before overflow callback is called
- onOverflow
callback to invoke on overflow
- returns
a buffering Flux
-
final
def
onBackpressureBuffer(maxSize: Int): Flux[T]
Request an unbounded demand and push the returned Flux, or park the observed elements if not enough demand is requested downstream.
-
final
def
onBackpressureBuffer(): Flux[T]
Request an unbounded demand and push the returned Flux, or park the observed elements if not enough demand is requested downstream.
-
final
def
onBackpressureDrop(onDropped: (T) ⇒ Unit): Flux[T]
Request an unbounded demand and push the returned Flux, or drop the observed elements, notifying the onDropped callback, if not enough demand is requested downstream.
-
final
def
onBackpressureDrop(): Flux[T]
Request an unbounded demand and push the returned Flux, or drop the observed elements if not enough demand is requested downstream.
-
final
def
onBackpressureError(): Flux[T]
Request an unbounded demand and push the returned Flux, or emit onError from reactor.core.Exceptions.failWithOverflow if not enough demand is requested downstream.
-
final
def
onBackpressureLatest(): Flux[T]
Request an unbounded demand and push the returned Flux, or only keep the most recent observed item if not enough demand is requested downstream.
-
final
def
onErrorMap(predicate: (Throwable) ⇒ Boolean, mapper: Function1[Throwable, _ <: Throwable]): Flux[T]
Transform the error emitted by this Flux by applying a function if the error matches the given predicate, otherwise let the error flow.
-
final
def
onErrorMap[E <: Throwable](type: Class[E], mapper: Function1[E, _ <: Throwable]): Flux[T]
Transform the error emitted by this Flux by applying a function if the error matches the given type, otherwise let the error flow.
![marble](https://raw.githubusercontent.com/reactor/reactor-core/v3.1.0.RC1/src/docs/marble/maperror.png)
- E
the error type
- type
the class of the exception type to react to
- mapper
the error transforming Function1
- returns
a transformed Flux
-
final
def
onErrorMap(mapper: Function1[Throwable, _ <: Throwable]): Flux[T]
Transform the error emitted by this Flux by applying a function.
-
final
def
onErrorRecover[U <: T](pf: PartialFunction[Throwable, U]): Flux[T]
Returns a Flux that mirrors the behavior of the source, unless the source terminates with an onError, in which case the streaming of events falls back to a Flux emitting a single element generated by the backup function.
The created Flux mirrors the behavior of the source if the source does not end with an error or if the thrown Throwable is not matched.
See onErrorResume for the version that takes a total function as a parameter.
- pf
a function that matches errors with a backup element that is emitted when the source throws an error.
- Definition Classes
- FluxLike
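The partial-function recovery can be modeled with `Try.recover` from the Scala standard library. A sketch of the semantics only, not the reactive operator itself:

```scala
import scala.util.{Success, Try}

// Model of onErrorRecover: a partial function maps matching errors to a
// fallback element; non-matching errors keep flowing as failures.
val pf: PartialFunction[Throwable, Int] = { case _: ArithmeticException => -1 }

val recovered  = Try(10 / 0).recover(pf)                                  // matched: falls back to -1
val notMatched = Try[Int](throw new IllegalStateException("boom")).recover(pf)
// recovered == Success(-1); notMatched stays a Failure because the
// partial function is not defined for IllegalStateException.
```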
-
final
def
onErrorRecoverWith[U <: T](pf: PartialFunction[Throwable, Flux[U]]): Flux[T]
Returns a Flux that mirrors the behavior of the source, unless the source terminates with an onError, in which case the streaming of events continues with the backup sequence generated by the given function.
The created Flux mirrors the behavior of the source if the source does not end with an error or if the thrown Throwable is not matched.
See onErrorResume for the version that takes a total function as a parameter.
- pf
a function that matches errors with a backup Publisher that is subscribed when the source throws an error.
- Definition Classes
- FluxLike
-
final
def
onErrorResume(predicate: (Throwable) ⇒ Boolean, fallback: Function1[Throwable, _ <: Publisher[_ <: T]]): Flux[T]
Subscribe to a returned fallback publisher when an error matching the given predicate occurs.
- predicate
the error predicate to match
- fallback
the Function1 mapping the error to a new Publisher sequence
- returns
a new Flux
-
final
def
onErrorResume[E <: Throwable](type: Class[E], fallback: Function1[E, _ <: Publisher[_ <: T]]): Flux[T]
Subscribe to a returned fallback publisher when an error matching the given type occurs.
- E
the error type
- type
the error type to match
- fallback
the Function1 mapping the error to a new Publisher sequence
- returns
a new Flux
-
final
def
onErrorResume[U <: T](fallback: Function1[Throwable, _ <: Publisher[_ <: U]]): Flux[U]
Subscribe to a returned fallback publisher when any error occurs.

- fallback
the Function1 mapping the error to a new Publisher sequence
- returns
a new Flux
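Switching to a fallback sequence derived from the error can be modeled with `Try.recoverWith`. A stdlib sketch of the semantics, not the reactive operator; `fallback` is a hypothetical helper:

```scala
import scala.util.{Success, Try}

// Model of onErrorResume: on any error, continue with a fallback
// "sequence" produced from the error itself.
def fallback(e: Throwable): Try[Int] = Success(e.getMessage.length)

val resumed = Try[Int](throw new RuntimeException("oops")).recoverWith { case e => fallback(e) }
// resumed == Success(4): the fallback derived from the error takes over,
// analogous to subscribing to the Publisher returned by the fallback function.
```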
-
final
def
onErrorReturn(predicate: (Throwable) ⇒ Boolean, fallbackValue: T): Flux[T]
Fall back to the given value if an error matching the given predicate is observed on this Flux.
- predicate
the error predicate to match
- fallbackValue
alternate value on fallback
- returns
a new Flux
- Definition Classes
- Flux → OnErrorReturn
-
final
def
onErrorReturn[E <: Throwable](type: Class[E], fallbackValue: T): Flux[T]
Fall back to the given value if an error of the given type is observed on this Flux.
- E
the error type
- type
the error type to match
- fallbackValue
alternate value on fallback
- returns
a new Flux
- Definition Classes
- Flux → OnErrorReturn
-
final
def
onErrorReturn(fallbackValue: T): Flux[T]
Fall back to the given value if an error is observed on this Flux.

- fallbackValue
alternate value on fallback
- returns
a new Flux
- Definition Classes
- Flux → OnErrorReturn
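Falling back to a constant value on error maps onto `Try(...).getOrElse(...)` in plain Scala. A semantic sketch only; the port-parsing example and its default are hypothetical:

```scala
import scala.util.Try

// Model of onErrorReturn: if the computation errors, emit a fixed fallback value.
def parsePort(s: String): Int = Try(s.toInt).getOrElse(8080)

val ok   = parsePort("9000")          // parses fine, keeps the real value
val fell = parsePort("not-a-number")  // NumberFormatException -> fallback 8080
```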
-
final
def
onTerminateDetach(): Flux[T]
Detaches both the child Subscriber and the Subscription on termination or cancellation.
This should help with odd retention scenarios when running with a non-reactor Subscriber.
- returns
a detachable Flux
-
def
operatorName: String
Check this Scannable and its Scannable.parents() for a name and return the first one that is reachable.
-
final
def
or(other: Publisher[_ <: T]): Flux[T]
Pick the first Publisher between this Flux and another publisher to emit any signal (onNext/onError/onComplete) and replay all signals from that Publisher, effectively behaving like the fastest of these competing sources.

- other
the Publisher to race with
- returns
the fastest sequence
- See also
-
final
def
parallel(parallelism: Int, prefetch: Int): SParallelFlux[T]
Prepare to consume this Flux on parallelism number of 'rails' in round-robin fashion and use custom prefetch amount and queue for dealing with the source Flux's values.
- parallelism
the number of parallel rails
- prefetch
the number of values to prefetch from the source
- returns
a new SParallelFlux instance
-
final
def
parallel(parallelism: Int): SParallelFlux[T]
Prepare to consume this Flux on parallelism number of 'rails' in round-robin fashion.
- parallelism
the number of parallel rails
- returns
a new SParallelFlux instance
-
final
def
parallel(): SParallelFlux[T]
Prepare to consume this Flux on a number of 'rails' matching the number of CPUs, in round-robin fashion.
- returns
a new SParallelFlux instance
-
def
parents: Stream[_ <: Scannable]
Return a Stream navigating the org.reactivestreams.Subscription chain (upward).
- returns
a Stream navigating the org.reactivestreams.Subscription chain (upward)
- Definition Classes
- Scannable
-
final
def
publish[R](transform: Function1[Flux[T], _ <: Publisher[_ <: R]], prefetch: Int): Flux[R]
Shares a sequence for the duration of a function that may transform it and consume it as many times as necessary without causing multiple subscriptions to the upstream.
- R
the output value type
- transform
the transformation function
- prefetch
the request size
- returns
a new Flux
-
final
def
publish[R](transform: Function1[Flux[T], _ <: Publisher[_ <: R]]): Flux[R]
Shares a sequence for the duration of a function that may transform it and consume it as many times as necessary without causing multiple subscriptions to the upstream.
- R
the output value type
- transform
the transformation function
- returns
a new Flux
-
final
def
publish(prefetch: Int): ConnectableFlux[T]
Prepare a ConnectableFlux which shares this Flux sequence and dispatches values to subscribers in a backpressure-aware manner.
Prepare a ConnectableFlux which shares this Flux sequence and dispatches values to subscribers in a backpressure-aware manner. This will effectively turn any type of sequence into a hot sequence.
Backpressure will be coordinated on Subscription.request and if any Subscriber is missing demand (requested = 0), multicast will pause pushing/pulling.
- prefetch
bounded requested demand
- returns
a new ConnectableFlux
-
final
def
publish(): ConnectableFlux[T]
Prepare a ConnectableFlux which shares this Flux sequence and dispatches values to subscribers in a backpressure-aware manner.
Prepare a ConnectableFlux which shares this Flux sequence and dispatches values to subscribers in a backpressure-aware manner. Prefetch will default to reactor.util.concurrent.Queues.SMALL_BUFFER_SIZE. This will effectively turn any type of sequence into a hot sequence.
Backpressure will be coordinated on Subscription.request and if any Subscriber is missing demand (requested = 0), multicast will pause pushing/pulling.
- returns
a new ConnectableFlux
-
final
def
publishNext(): Mono[T]
Prepare a Mono which shares this Flux sequence and dispatches the first observed item to subscribers in a backpressure-aware manner.
-
final
def
publishOn(scheduler: Scheduler, delayError: Boolean, prefetch: Int): Flux[T]
Run onNext, onComplete and onError on a supplied Scheduler reactor.core.scheduler.Scheduler.Worker.
Typically used for fast publisher, slow consumer(s) scenarios.

flux.publishOn(Schedulers.single()).subscribe()
- scheduler
a checked reactor.core.scheduler.Scheduler.Worker factory
- delayError
should the buffer be consumed before forwarding any error
- prefetch
the asynchronous boundary capacity
- returns
a Flux producing asynchronously
-
final
def
publishOn(scheduler: Scheduler, prefetch: Int): Flux[T]
Run onNext, onComplete and onError on a supplied Scheduler reactor.core.scheduler.Scheduler.Worker.
Typically used for fast publisher, slow consumer(s) scenarios.

flux.publishOn(Schedulers.single()).subscribe()
- scheduler
a checked reactor.core.scheduler.Scheduler.Worker factory
- prefetch
the asynchronous boundary capacity
- returns
a Flux producing asynchronously
-
final
def
publishOn(scheduler: Scheduler): Flux[T]
Run onNext, onComplete and onError on a supplied Scheduler reactor.core.scheduler.Scheduler.Worker.
Typically used for fast publisher, slow consumer(s) scenarios.

flux.publishOn(Schedulers.single()).subscribe()
- scheduler
a checked reactor.core.scheduler.Scheduler.Worker factory
- returns
a Flux producing asynchronously
-
final
def
reduce[A](initial: A, accumulator: (A, T) ⇒ A): Mono[A]
Accumulate the values from this Flux sequence into an object matching an initial value type.
Accumulate the values from this Flux sequence into an object matching an initial value type. The arguments are the N-1 (or initial) value and the Nth current item.
- A
the type of the initial and reduced object
- initial
the initial left argument to pass to the reducing BiFunction
- accumulator
the reducing BiFunction
- returns
a reduced Flux
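The reduce-with-initial semantics match `foldLeft` on an ordinary collection: the initial value sits on the left and items are folded in emission order. A stdlib sketch of the accumulation only:

```scala
// Model of reduce(initial, accumulator): foldLeft accumulates exactly as
// described, starting from the initial value and consuming items in order.
val items = List(1, 2, 3, 4)
val total = items.foldLeft(100)((acc, n) => acc + n)
// total == 110: start at 100, then fold in 1, 2, 3, 4.
```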
-
final
def
reduce(aggregator: (T, T) ⇒ T): Mono[T]
Aggregate the values from this Flux sequence into an object of the same type as the emitted items.
-
final
def
reduceWith[A](initial: () ⇒ A, accumulator: (A, T) ⇒ A): Mono[A]
Accumulate the values from this Flux sequence into an object matching an initial value type.
Accumulate the values from this Flux sequence into an object matching an initial value type. The arguments are the N-1 (or initial) value and the Nth current item.
- A
the type of the initial and reduced object
- initial
the initial left argument supplied on subscription to the reducing BiFunction
- accumulator
the reducing BiFunction
- returns
a reduced Flux
-
final
def
repeat(numRepeat: Long, predicate: () ⇒ Boolean): Flux[T]
Repeatedly subscribe to the source if the predicate returns true after completion of the previous subscription.
Repeatedly subscribe to the source if the predicate returns true after completion of the previous subscription. The specified maximum number of repeats limits the number of re-subscriptions.
- numRepeat
the number of times to re-subscribe on complete
- predicate
the boolean to evaluate on onComplete
- returns
an eventually repeated Flux on onComplete up to number of repeat specified OR matching predicate
-
final
def
repeat(numRepeat: Long): Flux[T]
Repeatedly re-subscribe to the source, up to the specified number of times, after completion of the previous subscription.
- numRepeat
the number of times to re-subscribe on onComplete
- returns
an eventually repeated Flux on onComplete up to number of repeat specified
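On a cold source, the emissions of repeat(numRepeat) can be modeled by concatenating passes over the sequence. This sketch assumes repeat(2) means one initial subscription plus two re-subscriptions (three passes in total); exact re-subscription counts should be checked against the library:

```scala
// Model of repeat(2) on a cold source: the sequence is replayed
// numRepeat additional times after the initial pass.
val source    = List("a", "b")
val numRepeat = 2
val emissions = List.fill(numRepeat + 1)(source).flatten
// emissions == List("a", "b", "a", "b", "a", "b")
```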
-
final
def
repeat(predicate: () ⇒ Boolean): Flux[T]
Repeatedly subscribe to the source if the predicate returns true after completion of the previous subscription.
- predicate
the boolean to evaluate on onComplete.
- returns
an eventually repeated Flux on onComplete
-
final
def
repeat(): Flux[T]
Repeatedly subscribe to the source on completion of the previous subscription.
- returns
an indefinitely repeated Flux on onComplete
-
final
def
repeatWhen(whenFactory: Function1[Flux[Long], _ <: Publisher[_]]): Flux[T]
Repeatedly subscribe to this Flux when a companion sequence signals a number of emitted elements in response to the flux completion signal.
If the companion sequence signals when this Flux is active, the repeat attempt is suppressed and any terminal signal will terminate this Flux with the same signal immediately.
-
final
def
replay(history: Int, ttl: Duration): ConnectableFlux[T]
Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber.
Turn this Flux into a connectable hot source and cache last emitted signals for further Subscribers. Will retain up to the given history size of onNext signals, each for the given per-item ttl. Completion and Error will also be replayed.
- history
number of events retained in history excluding complete and error
- ttl
Per-item timeout duration
- returns
a replaying ConnectableFlux
-
final
def
replay(ttl: Duration): ConnectableFlux[T]
Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber.
Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber. Will retain each onNext up to the given per-item expiry timeout. Completion and Error will also be replayed.
- ttl
Per-item timeout duration
- returns
a replaying ConnectableFlux
-
final
def
replay(history: Int): ConnectableFlux[T]
Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber.
Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber. Will retain up to the given history size onNext signals. Completion and Error will also be replayed.
- history
number of events retained in history excluding complete and error
- returns
a replaying ConnectableFlux
-
final
def
replay(): ConnectableFlux[T]
Turn this Flux into a hot source and cache last emitted signals for further Subscriber.
Turn this Flux into a hot source and cache last emitted signals for further Subscriber. Will retain an unbounded amount of onNext signals. Completion and Error will also be replayed.
- returns
a replaying ConnectableFlux
-
final
def
retry(numRetries: Long, retryMatcher: (Throwable) ⇒ Boolean): Flux[T]
Re-subscribes to this Flux sequence, up to the specified number of retries, if it signals any error and the given predicate matches; otherwise pushes the error downstream.
- numRetries
the number of times to tolerate an error
- retryMatcher
the predicate to evaluate if retry should occur based on a given error signal
- returns
a re-subscribing Flux on onError up to the specified number of retries and if the predicate matches.
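The retry-with-predicate loop can be sketched synchronously with `Try`: re-run the operation while retries remain and the error matches, otherwise rethrow. A semantic model only; the `retry` helper is hypothetical and the real operator re-subscribes asynchronously:

```scala
import scala.util.{Failure, Success, Try}

// Model of retry(numRetries, retryMatcher).
def retry[A](numRetries: Long, matches: Throwable => Boolean)(op: () => A): A =
  Try(op()) match {
    case Success(v)                                 => v
    case Failure(e) if numRetries > 0 && matches(e) => retry(numRetries - 1, matches)(op)
    case Failure(e)                                 => throw e // exhausted or unmatched: push downstream
  }

var attempts = 0
val result = retry(3, _.isInstanceOf[IllegalStateException]) { () =>
  attempts += 1
  if (attempts < 3) throw new IllegalStateException("transient") else "ok"
}
// result == "ok" after two failed attempts and one success.
```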
-
final
def
retry(retryMatcher: (Throwable) ⇒ Boolean): Flux[T]
Re-subscribes to this Flux sequence if it signals any error and the given predicate matches; otherwise pushes the error downstream.
-
final
def
retry(numRetries: Long): Flux[T]
Re-subscribes to this Flux sequence, up to the specified number of times, if it signals any error.
-
final
def
retry(): Flux[T]
Re-subscribes to this Flux sequence indefinitely if it signals any error.
-
final
def
retryWhen(whenFactory: (Flux[Throwable]) ⇒ Publisher[_]): Flux[T]
Retries this Flux when a companion sequence signals an item in response to this Flux error signal
-
final
def
sample[U](sampler: Publisher[U]): Flux[T]
Sample this Flux and emit its latest value whenever the sampler Publisher signals a value.
Sample this Flux and emit its latest value whenever the sampler Publisher signals a value.
Termination of either Publisher will result in termination for the Subscriber as well.
Both Publishers will run in unbounded mode, because backpressure would interfere with the sampling precision.
- U
the type of the sampler sequence
- sampler
the sampler Publisher
- returns
a sampled Flux by last item observed when the sampler Publisher signals
-
final
def
sample(timespan: Duration): Flux[T]
Emit latest value for every given period of time.
- timespan
the duration to emit the latest observed item
- returns
a sampled Flux by last item over a period of time
-
final
def
sampleFirst[U](samplerFactory: (T) ⇒ Publisher[U]): Flux[T]
Take a value from this Flux then use the duration provided by a generated Publisher to skip other values until that sampler Publisher signals.
- U
the companion reified type
- samplerFactory
select a Publisher companion to signal onNext or onComplete to stop excluding others values from this sequence
- returns
a sampled Flux by last item observed when the sampler signals
-
final
def
sampleFirst(timespan: Duration): Flux[T]
Take a value from this Flux then use the duration provided to skip other values.
-
final
def
sampleTimeout[U](throttlerFactory: (T) ⇒ Publisher[U], maxConcurrency: Int): Flux[T]
Emit the last value from this Flux only if there were no newer values emitted during the time window provided by a publisher for that particular last value.
The provided maxConcurrency will keep a bounded maximum of concurrent timeouts and drop any new items until at least one timeout terminates.
- U
the throttling type
- throttlerFactory
select a Publisher companion to signal onNext or onComplete to stop checking others values from this sequence and emit the selecting item
- maxConcurrency
the maximum number of concurrent timeouts
- returns
a sampled Flux by last single item observed before a companion Publisher emits
-
final
def
sampleTimeout[U](throttlerFactory: (T) ⇒ Publisher[U]): Flux[T]
Emit the last value from this Flux only if there were no new values emitted during the time window provided by a publisher for that particular last value.
- U
the companion reified type
- throttlerFactory
select a Publisher companion to signal onNext or onComplete to stop checking others values from this sequence and emit the selecting item
- returns
a sampled Flux by last single item observed before a companion Publisher emits
-
final
def
scan[A](initial: A, accumulator: (A, T) ⇒ A): Flux[A]
Aggregate this Flux's values with the help of an accumulator BiFunction and emit the intermediate results.
The accumulation works as follows:
result[0] = initialValue
result[1] = accumulator(result[0], source[0])
result[2] = accumulator(result[1], source[1])
result[3] = accumulator(result[2], source[2])
...
- A
the accumulated type
- initial
the initial argument to pass to the reduce function
- accumulator
the accumulating BiFunction
- returns
an accumulating Flux starting with initial state
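The recurrence above is exactly `scanLeft` on an ordinary collection: the initial value is emitted first, followed by every intermediate accumulation. A stdlib sketch of the semantics:

```scala
// Model of scan(initial, accumulator): scanLeft emits the initial value
// followed by each intermediate sum, matching result[n+1] = acc(result[n], source[n]).
val source  = List(1, 2, 3)
val running = source.scanLeft(0)(_ + _)
// running == List(0, 1, 3, 6)
```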
-
final
def
scan(accumulator: (T, T) ⇒ T): Flux[T]
Accumulate this Flux's values with an accumulator BiFunction and return the intermediate results of this function. Unlike the scan variant that takes an initial value, this operator treats the first Flux value as the initial value.
The accumulation works as follows:
result[0] = accumulator(source[0], source[1])
result[1] = accumulator(result[0], source[2])
result[2] = accumulator(result[1], source[3])
...
- accumulator
the accumulating BiFunction
- returns
an accumulating Flux
-
def
scan[T](key: Attr[T]): Option[T]
Introspect a component's specific state attribute, returning an associated value specific to that component, or the default value associated with the key, or null if the attribute doesn't make sense for that particular component and has no sensible default.
- key
a Attr to resolve for the component.
- returns
a value associated to the key or None if unmatched or unresolved
- Definition Classes
- Scannable
-
def
scanOrDefault[T](key: Attr[T], defaultValue: T): T
Introspect a component's specific state attribute.
Introspect a component's specific state attribute. If there's no specific value in the component for that key, fall back to returning the provided non null default.
- key
a Attr to resolve for the component.
- defaultValue
a fallback value if the key resolves to null
- returns
a value associated to the key or the provided default if unmatched or unresolved
- Definition Classes
- Scannable
-
def
scanUnsafe(key: Attr[_]): Option[AnyRef]
This method is used internally by components to define their key-value mappings in a single place.
This method is used internally by components to define their key-value mappings in a single place. Although it is ignoring the generic type of the Attr key, implementors should take care to return values of the correct type, and return None if no specific value is available.
For public consumption of attributes, prefer using Scannable.scan(Attr), which will return a typed value and fall back to the key's default if the component didn't define any mapping.
- key
an Attr to resolve for the component.
- returns
the value associated to the key for that specific component, or None if none.
- Definition Classes
- Scannable
-
final
def
scanWith[A](initial: () ⇒ A, accumulator: (A, T) ⇒ A): Flux[A]
Aggregate this Flux's values with the help of an accumulator BiFunction and emit the intermediate results.
The accumulation works as follows:
result[0] = initialValue
result[1] = accumulator(result[0], source[0])
result[2] = accumulator(result[1], source[1])
result[3] = accumulator(result[2], source[2])
...
- A
the accumulated type
- initial
the initial supplier to init the first value to pass to the reduce function
- accumulator
the accumulating BiFunction
- returns
an accumulating Flux starting with initial state
-
final
def
share(): Flux[T]
Returns a new Flux that multicasts (shares) the original Flux.
Returns a new Flux that multicasts (shares) the original Flux. As long as there is at least one Subscriber this Flux will be subscribed and emitting data. When all subscribers have cancelled it will cancel the source Flux.
This is an alias for Flux.publish().refCount().
-
final
def
single(defaultValue: T): Mono[T]
Expect and emit a single item from this Flux source or signal NoSuchElementException (or a default value) for empty source, IndexOutOfBoundsException for a multi-item source.
- defaultValue
a single fallback item if this Flux is empty
- returns
a Mono with the eventual single item or a supplied default value
-
final
def
single(): Mono[T]
Expect and emit a single item from this Flux source or signal NoSuchElementException (or a default generated value) for empty source, IndexOutOfBoundsException for a multi-item source.
-
final
def
singleOrEmpty(): Mono[T]
Expect and emit zero or one item from this Flux source, or signal NoSuchElementException for a multi-item source.
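The cardinality checks of single() and singleOrEmpty() can be modeled on a materialized sequence. A sketch of the documented semantics only; `single` and `singleOrEmpty` here are hypothetical stdlib helpers, not the reactive operators:

```scala
// Model of single(): exactly one item, or an exception.
def single[A](xs: List[A]): A = xs match {
  case x :: Nil => x
  case Nil      => throw new NoSuchElementException("source was empty")
  case _        => throw new IndexOutOfBoundsException("source emitted more than one item")
}

// Model of singleOrEmpty(): zero or one item tolerated, more is an error.
def singleOrEmpty[A](xs: List[A]): Option[A] = xs match {
  case Nil      => None
  case x :: Nil => Some(x)
  case _        => throw new NoSuchElementException("source emitted more than one item")
}

val one  = single(List(42))     // 42
val none = singleOrEmpty(Nil)   // None, the empty source is accepted
```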
-
final
def
skip(timespan: Duration, timer: Scheduler): Flux[T]
Skip elements from this Flux for the given time period.
-
final
def
skip(timespan: Duration): Flux[T]
Skip elements from this Flux for the given time period.
-
final
def
skip(skipped: Long): Flux[T]
Skip the specified number of elements from the start of this Flux.
-
final
def
skipLast(n: Int): Flux[T]
Skip the last specified number of elements from this Flux.
-
final
def
skipUntil(untilPredicate: (T) ⇒ Boolean): Flux[T]
Skip values from this Flux until a predicate returns true for the value.
-
final
def
skipUntilOther(other: Publisher[_]): Flux[T]
Skip values from this Flux until a specified Publisher signals an onNext or onComplete.
-
final
def
skipWhile(skipPredicate: (T) ⇒ Boolean): Flux[T]
Skip values from this Flux while a predicate returns true for the value.
-
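Both skip variants correspond to `dropWhile` on an ordered collection: skipWhile drops while the predicate holds, skipUntil drops while it does not yet hold. A stdlib sketch of the semantics:

```scala
// Model of skipWhile / skipUntil on an ordered sequence.
val source = List(1, 2, 5, 1, 6)

val afterWhile = source.dropWhile(_ < 5)          // skipWhile(_ < 5)
val afterUntil = source.dropWhile(x => !(x >= 5)) // skipUntil(_ >= 5)
// Both yield List(5, 1, 6): note the later 1 is kept, because
// skipping stops permanently once the boundary element is reached.
```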
final
def
sort(sortFunction: Ordering[T]): Flux[T]
Returns a Flux that sorts the events emitted by source Flux given the Ordering function.
-
final
def
sort(): Flux[T]
Returns a Flux that sorts the events emitted by source Flux.
Returns a Flux that sorts the events emitted by source Flux. Each item emitted by the Flux must implement Comparable with respect to all other items in the sequence.
Note that calling sort with long, non-terminating or infinite sources might cause an OutOfMemoryError. Use sequence splitting like Flux.windowWhen to sort batches in that case.
- returns
a sorting Flux
-
final
def
startWith(publisher: Publisher[_ <: T]): Flux[T]
Prepend the given Publisher sequence before this Flux sequence.
-
final
def
startWith(values: T*): Flux[T]
Prepend the given values before this Flux sequence.
-
final
def
startWith(iterable: Iterable[_ <: T]): Flux[T]
Prepend the given Iterable before this Flux sequence.
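Prepending behaves like list concatenation with the new values on the left. A stdlib sketch of the ordering only:

```scala
// Model of startWith: prepended values are emitted before the original sequence.
val source     = List(3, 4)
val withValues = List(1, 2) ++ source // startWith(1, 2)
// withValues == List(1, 2, 3, 4)
```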
-
final
def
subscribe(consumer: (T) ⇒ Unit, errorConsumer: (Throwable) ⇒ Unit, completeConsumer: () ⇒ Unit, subscriptionConsumer: (Subscription) ⇒ Unit): Disposable
Subscribe a consumer to this Flux that will consume the whole sequence. It will let the provided subscriptionConsumer request the adequate amount of data, or request unbounded demand (Long.MAX_VALUE) if no such consumer is provided.
For a passive version that observes and forwards incoming data, see Flux.doOnNext, Flux.doOnError, Flux.doOnComplete and Flux.doOnSubscribe.
For a version that gives you more control over backpressure and the request, see Flux.subscribe with a reactor.core.publisher.BaseSubscriber.
- consumer
the consumer to invoke on each value
- errorConsumer
the consumer to invoke on error signal
- completeConsumer
the consumer to invoke on complete signal
- subscriptionConsumer
the consumer to invoke on subscribe signal, to be used for the initial request, or null for max request
- returns
a new Disposable to dispose the Subscription
-
final
def
subscribe(consumer: (T) ⇒ Unit, errorConsumer: (Throwable) ⇒ Unit, completeConsumer: () ⇒ Unit): Disposable
Subscribe consumer to this Flux that will consume all the sequence.
Subscribe consumer to this Flux that will consume all the sequence. It will request unbounded demand (Long.MAX_VALUE).
For a passive version that observes and forwards incoming data, see Flux.doOnNext, Flux.doOnError and Flux.doOnComplete.
For a version that gives you more control over backpressure and the request, see Flux.subscribe with a reactor.core.publisher.BaseSubscriber.
- consumer
the consumer to invoke on each value
- errorConsumer
the consumer to invoke on error signal
- completeConsumer
the consumer to invoke on complete signal
- returns
a new Disposable to dispose the Subscription
-
final
def
subscribe(consumer: (T) ⇒ Unit, errorConsumer: (Throwable) ⇒ Unit): Disposable
Subscribe consumer to this Flux that will consume all the sequence.
Subscribe consumer to this Flux that will consume all the sequence. It will request unbounded demand (Long.MAX_VALUE).
For a passive version that observes and forwards incoming data, see Flux.doOnNext and Flux.doOnError.
For a version that gives you more control over backpressure and the request, see Flux.subscribe with a reactor.core.publisher.BaseSubscriber.
- consumer
the consumer to invoke on each next signal
- errorConsumer
the consumer to invoke on error signal
- returns
a new Disposable to dispose the Subscription
-
final
def
subscribe(consumer: (T) ⇒ Unit): Disposable
Subscribe a consumer to this Flux that will consume all the sequence.
Subscribe a consumer to this Flux that will consume all the sequence. It will request an unbounded demand.
For a passive version that observes and forwards incoming data, see Flux.doOnNext.
For a version that gives you more control over backpressure and the request, see Flux.subscribe with a reactor.core.publisher.BaseSubscriber.
- consumer
the consumer to invoke on each value
- returns
a new Disposable to dispose the Subscription
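A sketch using the three-consumer overload documented above (Flux.just is assumed from the companion object):

```scala
import reactor.core.scala.publisher.Flux

val disposable = Flux.just(1, 2, 3).subscribe(
  i => println(s"next: $i"),               // consumer: invoked on each value
  e => println(s"error: ${e.getMessage}"), // errorConsumer: invoked on the error signal
  () => println("done")                    // completeConsumer: invoked on completion
)

// The returned Disposable cancels the underlying Subscription if still active.
disposable.dispose()
```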
-
final
def
subscribe(): Disposable
Start the chain and request unbounded demand.
Start the chain and request unbounded demand.

- returns
a Disposable task to execute to dispose and cancel the underlying Subscription
-
def
subscribe(s: Subscriber[_ >: T]): Unit
- Definition Classes
- Flux → Publisher
-
final
def
subscribeOn(scheduler: Scheduler): Flux[T]
Run subscribe, onSubscribe and request on a supplied Scheduler.
Run subscribe, onSubscribe and request on a supplied Scheduler.
Typically used for slow publisher, fast consumer(s) scenarios, e.g. blocking IO.
flux.subscribeOn(Schedulers.single()).subscribe()
- scheduler
a checked reactor.core.scheduler.Scheduler.Worker factory
- returns
a Flux requesting asynchronously
-
final
def
subscribeWith[E <: Subscriber[T]](subscriber: E): E
A chaining Publisher.subscribe alternative to inline composition type conversion to a hot emitter (e.g. reactor.core.publisher.FluxProcessor or reactor.core.publisher.MonoProcessor).
flux.subscribeWith(WorkQueueProcessor.create()).subscribe()
If you need more control over backpressure and the request, use a reactor.core.publisher.BaseSubscriber.
- E
the reified type from the input/output subscriber
- subscriber
the Subscriber to subscribe and return
- returns
the passed Subscriber
-
final
def
subscriberContext(doOnContext: (Context) ⇒ Context): Flux[T]
Enrich a potentially empty downstream Context by applying a Function1 to it, producing a new Context that is propagated upstream.
Enrich a potentially empty downstream Context by applying a Function1 to it, producing a new Context that is propagated upstream.
The Context propagation happens once per subscription (not on each onNext): it is done during the subscribe(Subscriber) phase, which runs from the last operator of a chain towards the first.
So this operator enriches a Context coming from under it in the chain (downstream, by default an empty one) and passes the new enriched Context to operators above it in the chain (upstream, by way of them using Flux#subscribe(Subscriber, Context)).
- doOnContext
the function taking a previous Context state and returning a new one.
- returns
a contextualized Flux
- See also
Context
-
final
def
subscriberContext(mergeContext: Context): Flux[T]
Enrich a potentially empty downstream Context by adding all values from the given Context, producing a new Context that is propagated upstream.
Enrich a potentially empty downstream Context by adding all values from the given Context, producing a new Context that is propagated upstream.
The Context propagation happens once per subscription (not on each onNext): it is done during the subscribe(Subscriber) phase, which runs from the last operator of a chain towards the first.
So this operator enriches a Context coming from under it in the chain (downstream, by default an empty one) and passes the new enriched Context to operators above it in the chain (upstream, by way of them using Flux#subscribe(Subscriber, Context)).
- mergeContext
the Context to merge with a previous Context state, returning a new one.
- returns
a contextualized Flux
- See also
Context
-
final
def
switchIfEmpty(alternate: Publisher[_ <: T]): Flux[T]
Provide an alternative if this sequence is completed without any data
Provide an alternative if this sequence is completed without any data

- alternate
the alternate publisher if this sequence is empty
- returns
an alternating Flux on source onComplete without elements
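A minimal sketch of switchIfEmpty (Flux.empty and Flux.just are assumed companion factories):

```scala
import reactor.core.scala.publisher.Flux

// The source completes without emitting anything, so the
// alternate Publisher is subscribed instead: emits -1.
Flux.empty[Int].switchIfEmpty(Flux.just(-1)).subscribe(i => println(i))

// A non-empty source ignores the alternate entirely: emits 1, 2.
Flux.just(1, 2).switchIfEmpty(Flux.just(-1)).subscribe(i => println(i))
```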
-
final
def
switchMap[V](fn: (T) ⇒ Publisher[_ <: V], prefetch: Int): Flux[V]
Switch to a new Publisher generated via a Function whenever this Flux produces an item.
-
final
def
switchMap[V](fn: (T) ⇒ Publisher[_ <: V]): Flux[V]
Switch to a new Publisher generated via a Function whenever this Flux produces an item.
-
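A shape sketch of switchMap (Flux.just is an assumed companion factory). Note that with an asynchronous source, emissions of the previous inner Publisher are cancelled as soon as the next outer item arrives; with a synchronous source like this one it behaves like a sequential flatMap:

```scala
import reactor.core.scala.publisher.Flux

Flux.just("a", "b")
  .switchMap(s => Flux.just(s + "1", s + "2"))
  .subscribe(v => println(v))
```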
final
def
synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
-
final
def
tag(key: String, value: String): Flux[T]
Tag this flux with a key/value pair.
Tag this flux with a key/value pair. These can be retrieved as a Stream of all tags throughout the publisher chain by using reactor.core.scala.Scannable.tags() (as traversed by reactor.core.scala.Scannable.parents()).
- key
a tag key
- value
a tag value
- returns
the same sequence, but bearing tags
-
def
tags: Stream[(String, String)]
Visit this Scannable and its Scannable.parents() and stream all the observed tags
-
final
def
take(timespan: Duration, timer: Scheduler): Flux[T]
Relay values from this Flux until the given time period elapses.
-
final
def
take(timespan: Duration): Flux[T]
Relay values from this Flux until the given time period elapses.
-
final
def
take(n: Long): Flux[T]
Take only the first N values from this Flux.
-
final
def
takeLast(n: Int): Flux[T]
Emit the last N values this Flux emitted before its completion.
-
final
def
takeUntil(predicate: (T) ⇒ Boolean): Flux[T]
Relay values from this Flux until the given Predicate matches.
Relay values from this Flux until the given Predicate matches. Unlike Flux.takeWhile, this will include the matched data.
-
final
def
takeUntilOther(other: Publisher[_]): Flux[T]
Relay values from this Flux until the given Publisher emits.
-
final
def
takeWhile(continuePredicate: (T) ⇒ Boolean): Flux[T]
Relay values while a predicate returns True for the values (checked before each value is delivered).
Relay values while a predicate returns True for the values (checked before each value is delivered). Unlike Flux.takeUntil, this will exclude the matched data.
- continuePredicate
the Predicate invoked on each onNext, returning False to terminate
- returns
an eventually limited Flux
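The inclusive/exclusive distinction between takeUntil and takeWhile can be seen side by side (Flux.just is an assumed companion factory):

```scala
import reactor.core.scala.publisher.Flux

// takeWhile excludes the first non-matching value: emits 1, 2.
Flux.just(1, 2, 3, 4).takeWhile(_ < 3).subscribe(i => println(i))

// takeUntil includes the first matching value: emits 1, 2, 3.
Flux.just(1, 2, 3, 4).takeUntil(_ >= 3).subscribe(i => println(i))
```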
-
final
def
then(): Mono[Unit]
Return a Mono[Unit] that completes when this Flux completes.
-
final
def
thenEmpty(other: Publisher[Unit]): Mono[Unit]
Return a Mono[Unit] that waits for this Flux to complete then for a supplied Publisher[Unit] to also complete.
Return a Mono[Unit] that waits for this Flux to complete then for a supplied Publisher[Unit] to also complete. The second completion signal is replayed, or any error signal that occurs instead.
- other
a Publisher to wait for after this Flux's termination
- returns
a new Mono completing when both publishers have completed in sequence
-
final
def
thenMany[V](other: Publisher[V]): Flux[V]
Return a Flux that emits the sequence of the supplied Publisher after this Flux completes, ignoring this Flux's elements.
Return a Flux that emits the sequence of the supplied Publisher after this Flux completes, ignoring this Flux's elements. If an error occurs it immediately terminates the resulting Flux.
- V
the supplied produced type
- other
a Publisher to emit from after termination
- returns
a new Flux emitting eventually from the supplied Publisher
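A minimal sketch of thenMany (Flux.just is an assumed companion factory):

```scala
import reactor.core.scala.publisher.Flux

// 1, 2, 3 are consumed but dropped; once the first Flux
// completes, "a" and "b" are emitted.
Flux.just(1, 2, 3).thenMany(Flux.just("a", "b")).subscribe(s => println(s))
```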
-
final
def
timeout[U, V](firstTimeout: Publisher[U], nextTimeoutFactory: (T) ⇒ Publisher[V], fallback: Publisher[_ <: T]): Flux[T]
Switch to a fallback Publisher in case a first item from this Flux has not been emitted before the given Publisher emits.
Switch to a fallback Publisher in case a first item from this Flux has not been emitted before the given Publisher emits. The following items will be individually timed via the factory provided Publisher.
- U
the type of the elements of the first timeout Publisher
- V
the type of the elements of the subsequent timeout Publishers
- firstTimeout
the timeout Publisher that must not emit before the first signal from this Flux
- nextTimeoutFactory
the timeout Publisher factory for each next item
- fallback
the fallback Publisher to subscribe when a timeout occurs
- returns
a first then per-item expirable Flux with a fallback Publisher
-
final
def
timeout[U, V](firstTimeout: Publisher[U], nextTimeoutFactory: (T) ⇒ Publisher[V]): Flux[T]
Signal a java.util.concurrent.TimeoutException in case a first item from this Flux has not been emitted before the given Publisher emits.
Signal a java.util.concurrent.TimeoutException in case a first item from this Flux has not been emitted before the given Publisher emits. The following items will be individually timed via the factory provided Publisher.
- U
the type of the elements of the first timeout Publisher
- V
the type of the elements of the subsequent timeout Publishers
- firstTimeout
the timeout Publisher that must not emit before the first signal from this Flux
- nextTimeoutFactory
the timeout Publisher factory for each next item
- returns
a first then per-item expirable Flux
-
final
def
timeout[U](firstTimeout: Publisher[U]): Flux[T]
Signal a java.util.concurrent.TimeoutException in case a first item from this Flux has not been emitted before the given Publisher emits.
Signal a java.util.concurrent.TimeoutException in case a first item from this Flux has not been emitted before the given Publisher emits.
-
final
def
timeout(timeout: Duration, fallback: Option[Publisher[_ <: T]]): Flux[T]
Switch to a fallback Publisher in case a per-item period fires before the next item arrives from this Flux.
Switch to a fallback Publisher in case a per-item period fires before the next item arrives from this Flux.
If the given Publisher is None, signal a java.util.concurrent.TimeoutException.
- timeout
the timeout between two signals from this Flux
- fallback
the optional fallback Publisher to subscribe when a timeout occurs
- returns
a per-item expirable Flux with a fallback Publisher
-
final
def
timeout(timeout: Duration): Flux[T]
Signal a java.util.concurrent.TimeoutException in case a per-item period fires before the next item arrives from this Flux.
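A sketch of the per-item timeout, assuming the Scala duration DSL and a delayElements operator on the wrapper (both are assumptions not shown in this entry):

```scala
import reactor.core.scala.publisher.Flux
import scala.concurrent.duration._

// Each item arrives after 50 ms, well inside the 100 ms window,
// so no TimeoutException is signalled here. Shrinking the timeout
// below the delay would route the error to the errorConsumer.
Flux.just(1, 2, 3)
  .delayElements(50.milliseconds)
  .timeout(100.milliseconds)
  .subscribe(i => println(i), e => println(s"timed out: $e"))
```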
-
final
def
timestamp(scheduler: Scheduler): Flux[(Long, T)]
Emit a Tuple2 pair of T1 (the current system time in millis, as a Long) and T2 (the associated data) for each item from this Flux.
-
final
def
timestamp(): Flux[(Long, T)]
Emit a Tuple2 pair of T1 (the current system time in millis, as a Long) and T2 (the associated data) for each item from this Flux.
-
final
def
toIterable(batchSize: Int, queueProvider: Option[Supplier[Queue[T]]]): Iterable[T]
Transform this Flux into a lazy Iterable blocking on next calls.
Transform this Flux into a lazy Iterable blocking on next calls.
- batchSize
the bounded capacity to produce to this Flux or Int.MaxValue for unbounded
- queueProvider
the optional supplier of the queue implementation to be used for transferring elements across threads. The supplier of queue can easily be obtained using reactor.util.concurrent.QueueSupplier.get
- returns
a blocking Iterable
-
final
def
toIterable(batchSize: Int): Iterable[T]
Transform this Flux into a lazy Iterable blocking on next calls.
-
final
def
toIterable(): Iterable[T]
Transform this Flux into a lazy Iterable blocking on next calls.
Transform this Flux into a lazy Iterable blocking on next calls.

- returns
a blocking Iterable
-
final
def
toStream(batchSize: Int): Stream[T]
Transform this Flux into a lazy Stream blocking on next calls.
-
final
def
toStream(): Stream[T]
Transform this Flux into a lazy Stream blocking on next calls.
Transform this Flux into a lazy Stream blocking on next calls.
- returns
a Stream of unknown size with onClose attached to Subscription.cancel
-
def
toString(): String
- Definition Classes
- AnyRef → Any
-
final
def
transform[V](transformer: (Flux[T]) ⇒ Publisher[V]): Flux[V]
Transform this Flux in order to generate a target Flux.
Transform this Flux in order to generate a target Flux. Unlike Flux.compose, the provided function is executed as part of assembly.
- V
the item type in the returned Flux
- transformer
the Function1 to immediately map this Flux into a target Flux instance.
- returns
a new Flux
Example:
val applySchedulers = (f: Flux[Int]) => f.subscribeOn(Schedulers.elastic()).publishOn(Schedulers.parallel())
flux.transform(applySchedulers).map(v => v * v).subscribe()
- See also
Flux.compose for deferred composition of Flux for each Subscriber
Flux.as for a loose conversion to an arbitrary type
-
final
def
wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @throws( ... )
-
final
def
window(timespan: Duration, timeshift: Duration, timer: Scheduler): Flux[Flux[T]]
Split this Flux sequence into multiple Flux delimited by the given timeshift period, starting from the first item.
Split this Flux sequence into multiple Flux delimited by the given timeshift period, starting from the first item. Each Flux bucket will onComplete after the timespan period has elapsed.
When timeshift > timespan : dropping windows

When timeshift < timespan : overlapping windows

When timeshift == timespan : exact windows
-
final
def
window(timespan: Duration, timer: Scheduler): Flux[Flux[T]]
Split this Flux sequence into continuous, non-overlapping windows delimited by a given period.
-
final
def
window(timespan: Duration, timeshift: Duration): Flux[Flux[T]]
Split this Flux sequence into multiple Flux delimited by the given timeshift period, starting from the first item.
Split this Flux sequence into multiple Flux delimited by the given timeshift period, starting from the first item. Each Flux bucket will onComplete after the timespan period has elapsed.
When timeshift > timespan : dropping windows

When timeshift < timespan : overlapping windows

When timeshift == timespan : exact windows
-
final
def
window(timespan: Duration): Flux[Flux[T]]
Split this Flux sequence into continuous, non-overlapping windows delimited by a given period.
-
final
def
window(boundary: Publisher[_]): Flux[Flux[T]]
Split this Flux sequence into continuous, non-overlapping windows where the window boundary is signalled by another Publisher
-
final
def
window(maxSize: Int, skip: Int): Flux[Flux[T]]
Split this Flux sequence into multiple Flux delimited by the given skip count, starting from the first item.
-
final
def
window(maxSize: Int): Flux[Flux[T]]
Split this Flux sequence into multiple Flux delimited by the given maxSize count and starting from the first item.
-
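A sketch of size-based windowing (Flux.just, concatMap and count() are assumed from the wrapper API):

```scala
import reactor.core.scala.publisher.Flux

// Splits 1..5 into windows of at most 2 elements, then counts
// each inner Flux: emits 2, 2, 1.
Flux.just(1, 2, 3, 4, 5)
  .window(2)
  .concatMap(w => w.count())
  .subscribe(n => println(n))
```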
final
def
windowTimeout(maxSize: Int, timespan: Duration, timer: Scheduler): Flux[Flux[T]]
Split this Flux sequence into multiple Flux delimited by the given maxSize number of items, starting from the first item.
-
final
def
windowTimeout(maxSize: Int, timespan: Duration): Flux[Flux[T]]
Split this Flux sequence into multiple Flux delimited by the given maxSize number of items, starting from the first item.
-
final
def
windowUntil(boundaryTrigger: (T) ⇒ Boolean, cutBefore: Boolean, prefetch: Int): Flux[Flux[T]]
Split this Flux sequence into multiple Flux windows delimited by the given predicate and using a prefetch.
Split this Flux sequence into multiple Flux windows delimited by the given predicate and using a prefetch. A new window is opened each time the predicate returns true.
If cutBefore is true, the old window will onComplete and the triggering element will be emitted in the new window. Note it can mean that an empty window is sometimes emitted, e.g. if the first element in the sequence immediately matches the predicate.
Otherwise, the triggering element will be emitted in the old window before it does onComplete, similar to Flux.windowUntil(Predicate).
-
final
def
windowUntil(boundaryTrigger: (T) ⇒ Boolean, cutBefore: Boolean): Flux[Flux[T]]
Split this Flux sequence into multiple Flux windows delimited by the given predicate.
Split this Flux sequence into multiple Flux windows delimited by the given predicate. A new window is opened each time the predicate returns true.
If cutBefore is true, the old window will onComplete and the triggering element will be emitted in the new window. Note it can mean that an empty window is sometimes emitted, e.g. if the first element in the sequence immediately matches the predicate.
Otherwise, the triggering element will be emitted in the old window before it does onComplete, similar to Flux.windowUntil(Predicate).
-
final
def
windowUntil(boundaryTrigger: (T) ⇒ Boolean): Flux[Flux[T]]
Split this Flux sequence into multiple Flux windows delimited by the given predicate.
-
final
def
windowWhen[U, V](bucketOpening: Publisher[U], closeSelector: (U) ⇒ Publisher[V]): Flux[Flux[T]]
Split this Flux sequence into potentially overlapping windows controlled by items of a start Publisher and end Publisher derived from the start values.
Split this Flux sequence into potentially overlapping windows controlled by items of a start Publisher and end Publisher derived from the start values.
When Open signal is strictly not overlapping Close signal : dropping windows

When Open signal is strictly more frequent than Close signal : overlapping windows

When Open signal is exactly coordinated with Close signal : exact windows
- U
the type of the sequence opening windows
- V
the type of the sequence closing windows opened by the bucketOpening Publisher's elements
- bucketOpening
a Publisher to emit any item for a split signal and complete to terminate
- closeSelector
a Function given an opening signal and returning a Publisher that emits to complete the window
- returns
a windowing Flux delimiting its sub-sequences by a given Publisher and lasting until a selected Publisher emits
-
final
def
windowWhile(inclusionPredicate: (T) ⇒ Boolean, prefetch: Int): Flux[Flux[T]]
Split this Flux sequence into multiple Flux windows that stay open while a given predicate matches the source elements.
Split this Flux sequence into multiple Flux windows that stay open while a given predicate matches the source elements. Once the predicate returns false, the window closes with an onComplete and the triggering element is discarded.
Note that for a sequence starting with a separator, or having several subsequent separators anywhere in the sequence, each occurrence will lead to an empty window.
-
final
def
windowWhile(inclusionPredicate: (T) ⇒ Boolean): Flux[Flux[T]]
Split this Flux sequence into multiple Flux windows that stay open while a given predicate matches the source elements.
Split this Flux sequence into multiple Flux windows that stay open while a given predicate matches the source elements. Once the predicate returns false, the window closes with an onComplete and the triggering element is discarded.
Note that for a sequence starting with a separator, or having several subsequent separators anywhere in the sequence, each occurrence will lead to an empty window.
-
final
def
withLatestFrom[U, R](other: Publisher[_ <: U], resultSelector: Function2[T, U, _ <: R]): Flux[R]
Combine values from this Flux with values from another Publisher through a BiFunction and emit the result.
Combine values from this Flux with values from another Publisher through a BiFunction and emit the result. The operator will drop values from this Flux until the other Publisher produces any value.
If the other Publisher completes without any value, the sequence is completed.
- U
the other Publisher sequence type
- R
the result type
- other
the Publisher to combine with
- resultSelector
the bi-function called with each pair of source and other elements that should return a single value to be emitted
- returns
a combined Flux gated by another Publisher
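A shape sketch of withLatestFrom (Flux.just is an assumed companion factory). With synchronous sources the exact pairing depends on subscription timing, so this only illustrates the types, not a guaranteed output:

```scala
import reactor.core.scala.publisher.Flux

val other = Flux.just("x", "y")

// Each source element is combined with the latest value seen
// from `other` at that moment via the resultSelector.
Flux.just(1, 2, 3)
  .withLatestFrom[String, String](other, (i, s) => s"$i$s")
  .subscribe(v => println(v))
```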
-
final
def
zipWith[T2](source2: Publisher[_ <: T2], prefetch: Int): Flux[(T, T2)]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

-
final
def
zipWith[T2, V](source2: Publisher[_ <: T2], prefetch: Int, combinator: (T, T2) ⇒ V): Flux[V]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations produced by the passed combinator from the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

- T2
type of the value from source2
- V
The produced output after transformation by the combinator
- source2
The second upstream Publisher to subscribe to.
- prefetch
the request size to use for this Flux and the other Publisher
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a zipped Flux
-
final
def
zipWith[T2, V](source2: Publisher[_ <: T2], combinator: (T, T2) ⇒ V): Flux[V]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations produced by the passed combinator from the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

- T2
type of the value from source2
- V
The produced output after transformation by the combinator
- source2
The second upstream Publisher to subscribe to.
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a zipped Flux
-
final
def
zipWith[T2](source2: Publisher[_ <: T2]): Flux[(T, T2)]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

- T2
type of the value from source2
- source2
The second upstream Publisher to subscribe to.
- returns
a zipped Flux
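A minimal sketch of the two zipWith shapes (Flux.just is an assumed companion factory). Zipping pairs items index by index and completes with the shorter source:

```scala
import reactor.core.scala.publisher.Flux

// Without a combinator, pairs are emitted as Tuple2:
// emits (1,"a"), (2,"b"), then completes (3 is unmatched).
Flux.just(1, 2, 3).zipWith(Flux.just("a", "b")).subscribe(p => println(p))

// With a combinator, each pair is mapped before emission:
// emits "1a", "2b".
Flux.just(1, 2, 3)
  .zipWith[String, String](Flux.just("a", "b"), (i, s) => s"$i$s")
  .subscribe(v => println(v))
```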
-
final
def
zipWithIterable[T2, V](iterable: Iterable[_ <: T2], zipper: Function2[T, T2, _ <: V]): Flux[V]
Pairwise combines elements of this Flux and an Iterable sequence using the given zipper BiFunction.
-
final
def
zipWithIterable[T2](iterable: Iterable[_ <: T2]): Flux[(T, T2)]
Pairwise combines as
Tuple2 elements of this Flux and an Iterable sequence.
-
final
def
zipWithTimeSinceSubscribe(): Flux[(T, Long)]