
trait FluxProcessor[IN, OUT] extends Flux[OUT] with Processor[IN, OUT] with Disposable with Scannable

A base processor that exposes the Flux API on top of an org.reactivestreams.Processor.

Implementors include reactor.core.publisher.UnicastProcessor, reactor.core.publisher.EmitterProcessor, reactor.core.publisher.ReplayProcessor, reactor.core.publisher.WorkQueueProcessor and reactor.core.publisher.TopicProcessor.

IN

the input value type

OUT

the output value type

Linear Supertypes
Disposable, Processor[IN, OUT], Subscriber[IN], Flux[OUT], Scannable, Filter[OUT], FluxLike[OUT], OnErrorReturn[OUT], MapablePublisher[OUT], Publisher[OUT], AnyRef, Any

Abstract Value Members

  1. abstract def jFluxProcessor: publisher.FluxProcessor[IN, OUT]
    Attributes
    protected

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def actuals(): Stream[_ <: Scannable]
    Definition Classes
    Scannable
  5. final def all(predicate: (OUT) ⇒ Boolean): Mono[Boolean]

    Emit a single boolean true if all values of this sequence match the given predicate.

    The implementation uses short-circuit logic and completes with false if the predicate doesn't match a value.

    predicate

    the predicate to match all emitted items

    returns

    a Mono of all evaluations

    Definition Classes
    Flux
  6. final def any(predicate: (OUT) ⇒ Boolean): Mono[Boolean]

    Emit a single boolean true if any of the values of this Flux sequence match the predicate.

    The implementation uses short-circuit logic and completes with true if the predicate matches a value.

    predicate

    predicate tested upon values

    returns

    a new Flux with true if any value satisfies a predicate and false otherwise

    Definition Classes
    Flux
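
    The short-circuit behaviour of all and any can be sketched as follows (a minimal example; assumes reactor-core-scala's Flux.just factory is on the classpath):

    ```scala
    import reactor.core.scala.publisher.Flux

    // all: completes with false as soon as one value fails the predicate
    Flux.just(2, 4, 5, 6)
      .all(_ % 2 == 0)
      .subscribe(b => println(s"all even: $b")) // all even: false

    // any: completes with true as soon as one value matches
    Flux.just(1, 3, 4)
      .any(_ % 2 == 0)
      .subscribe(b => println(s"any even: $b")) // any even: true
    ```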
  7. final def as[P](transformer: (Flux[OUT]) ⇒ P): P

    Immediately apply the given transformation to this Flux in order to generate a target type.

    flux.as(Mono::from).subscribe()

    P

    the returned type

    transformer

    the Function1 to immediately map this Flux into a target type instance.

    returns

    an instance of P

    Definition Classes
    Flux
    See also

    Flux.compose for a bounded conversion to Publisher

  8. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  9. final def asJava(): publisher.Flux[OUT]
    Definition Classes
    Flux
  10. final def blockFirst(d: Duration): Option[OUT]

    Blocks until the upstream signals its first value or completes.

    d

    max duration timeout to wait for.

    returns

    the first value wrapped in Some, or None if the sequence completes empty

    Definition Classes
    Flux
  11. final def blockFirst(): Option[OUT]

    Blocks until the upstream signals its first value or completes.

    returns

    the first value wrapped in Some, or None if the sequence completes empty

    Definition Classes
    Flux
  12. final def blockLast(d: Duration): Option[OUT]

    Blocks until the upstream completes and returns the last emitted value.

    d

    max duration timeout to wait for.

    returns

    the last value wrapped in Some, or None if the sequence completes empty

    Definition Classes
    Flux
  13. final def blockLast(): Option[OUT]

    Blocks until the upstream completes and returns the last emitted value.

    returns

    the last value wrapped in Some, or None if the sequence completes empty

    Definition Classes
    Flux
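
    As a sketch of the blocking variants (assumes reactor-core-scala's Flux.just and Flux.empty factories):

    ```scala
    import scala.concurrent.duration._
    import reactor.core.scala.publisher.Flux

    val letters = Flux.just("a", "b", "c")

    letters.blockFirst()            // Some("a")
    letters.blockLast(5.seconds)    // Some("c"), or an error if nothing arrives within 5 seconds
    Flux.empty[String].blockFirst() // None
    ```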
  14. final def buffer(timespan: Duration, timeshift: Duration): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq delimited by the given timeshift period. Each Seq bucket will last until the timespan has elapsed, thus releasing the bucket to the returned Flux.

    When timeshift > timespan : dropping buffers

    When timeshift < timespan : overlapping buffers

    When timeshift == timespan : exact buffers

    timespan

    the duration to use to release buffered lists

    timeshift

    the duration to use to create a new bucket

    returns

    a microbatched Flux of Seq delimited by the given period timeshift and sized by timespan

    Definition Classes
    Flux
  15. final def buffer(timespan: Duration): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq that will be pushed into the returned Flux every timespan.

    timespan

    the duration to use to release a buffered list

    returns

    a microbatched Flux of Seq delimited by the given period

    Definition Classes
    Flux
  16. final def buffer[C <: ListBuffer[OUT]](other: Publisher[_], bufferSupplier: () ⇒ C): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq delimited by the given Publisher signals.

    C

    the supplied Seq type

    other

    the other Publisher to subscribe to for emitting and recycling receiving bucket

    bufferSupplier

    the collection to use for each data segment

    returns

    a microbatched Flux of Seq delimited by a Publisher

    Definition Classes
    Flux
  17. final def buffer(other: Publisher[_]): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq delimited by the given Publisher signals.

    other

    the other Publisher to subscribe to for emitting and recycling receiving bucket

    returns

    a microbatched Flux of Seq delimited by a Publisher

    Definition Classes
    Flux
  18. final def buffer[C <: ListBuffer[OUT]](maxSize: Int, skip: Int, bufferSupplier: () ⇒ C): Flux[Seq[OUT]]

    Collect incoming values into multiple mutable.Seq that will be pushed into the returned Flux when the given max size is reached or onComplete is received. A new container mutable.Seq will be created every given skip count.

    When Skip > Max Size : dropping buffers

    When Skip < Max Size : overlapping buffers

    When Skip == Max Size : exact buffers

    C

    the supplied mutable.Seq type

    maxSize

    the max collected size

    skip

    the number of items to skip before creating a new bucket

    bufferSupplier

    the collection to use for each data segment

    returns

    a microbatched Flux of possibly overlapped or gapped mutable.Seq

    Definition Classes
    Flux
  19. final def buffer(maxSize: Int, skip: Int): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq that will be pushed into the returned Flux when the given max size is reached or onComplete is received. A new container Seq will be created every given skip count.

    When Skip > Max Size : dropping buffers

    When Skip < Max Size : overlapping buffers

    When Skip == Max Size : exact buffers

    maxSize

    the max collected size

    skip

    the number of items to skip before creating a new bucket

    returns

    a microbatched Flux of possibly overlapped or gapped Seq

    Definition Classes
    Flux
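
    The three skip/maxSize regimes can be sketched as follows (assumes reactor-core-scala's Flux.just factory):

    ```scala
    import reactor.core.scala.publisher.Flux

    val src = Flux.just(1, 2, 3, 4, 5, 6)

    src.buffer(2)    // exact buffers:       Seq(1,2), Seq(3,4), Seq(5,6)
    src.buffer(2, 3) // dropping buffers:    Seq(1,2), Seq(4,5)   -- 3 and 6 fall in the gaps
    src.buffer(3, 2) // overlapping buffers: Seq(1,2,3), Seq(3,4,5), Seq(5,6)
    ```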
  20. final def buffer[C <: ListBuffer[OUT]](maxSize: Int, bufferSupplier: () ⇒ C): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq buckets that will be pushed into the returned Flux when the given max size is reached or onComplete is received.

    C

    the supplied Seq type

    maxSize

    the maximum collected size

    bufferSupplier

    the collection to use for each data segment

    returns

    a microbatched Flux of Seq

    Definition Classes
    Flux
  21. final def buffer(maxSize: Int): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq buckets that will be pushed into the returned Flux when the given max size is reached or onComplete is received.

    maxSize

    the maximum collected size

    returns

    a microbatched Flux of Seq

    Definition Classes
    Flux
  22. final def buffer(): Flux[Seq[OUT]]

    Collect incoming values into a Seq that will be pushed into the returned Flux on complete only.

    returns

    a buffered Flux of at most one Seq

    Definition Classes
    Flux
    See also

    #collectList() for an alternative collecting algorithm returning Mono

  23. def bufferSize(): Int

    Return the processor buffer capacity if any, or Int.MaxValue otherwise.

    returns

    the processor buffer capacity if any, or Int.MaxValue

  24. final def bufferTimeout[C <: ListBuffer[OUT]](maxSize: Int, timespan: Duration, bufferSupplier: () ⇒ C): Flux[Seq[OUT]]

    Collect incoming values into a Seq that will be pushed into the returned Flux every timespan OR maxSize items.

    C

    the supplied Seq type

    maxSize

    the max collected size

    timespan

    the timeout to use to release a buffered list

    bufferSupplier

    the collection to use for each data segment

    returns

    a microbatched Flux of Seq delimited by given size or a given period timeout

    Definition Classes
    Flux
  25. final def bufferTimeout(maxSize: Int, timespan: Duration): Flux[Seq[OUT]]

    Collect incoming values into a Seq that will be pushed into the returned Flux every timespan OR maxSize items.

    maxSize

    the max collected size

    timespan

    the timeout to use to release a buffered list

    returns

    a microbatched Flux of Seq delimited by given size or a given period timeout

    Definition Classes
    Flux
  26. final def bufferUntil(predicate: (OUT) ⇒ Boolean, cutBefore: Boolean): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq that will be pushed into the returned Flux each time the given predicate returns true. Note that the buffer into which the element that triggers the predicate to return true (and thus closes a buffer) is included depends on the cutBefore parameter: set it to true to include the boundary element in the newly opened buffer, false to include it in the closed buffer (as in Flux.bufferUntil).

    On completion, if the latest buffer is non-empty and has not been closed it is emitted. However, such a "partial" buffer isn't emitted in case of onError termination.

    predicate

    a predicate that triggers the next buffer when it becomes true.

    cutBefore

    set to true to include the triggering element in the new buffer rather than the old.

    returns

    a microbatched Flux of Seq

    Definition Classes
    Flux
  27. final def bufferUntil(predicate: (OUT) ⇒ Boolean): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq that will be pushed into the returned Flux each time the given predicate returns true. Note that the element that triggers the predicate to return true (and thus closes a buffer) is included as last element in the emitted buffer.

    On completion, if the latest buffer is non-empty and has not been closed it is emitted. However, such a "partial" buffer isn't emitted in case of onError termination.

    predicate

    a predicate that triggers the next buffer when it becomes true.

    returns

    a microbatched Flux of Seq

    Definition Classes
    Flux
  28. final def bufferWhen[U, V, C <: ListBuffer[OUT]](bucketOpening: Publisher[U], closeSelector: (U) ⇒ Publisher[V], bufferSupplier: () ⇒ C): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq delimited by the given Publisher signals. Each Seq bucket will last until the mapped Publisher receiving the boundary signal emits, thus releasing the bucket to the returned Flux.

    When Open signal is strictly not overlapping Close signal : dropping buffers

    When Open signal is strictly more frequent than Close signal : overlapping buffers

    When Open signal is exactly coordinated with Close signal : exact buffers

    U

    the element type of the bucket-opening sequence

    V

    the element type of the bucket-closing sequence

    C

    the supplied Seq type

    bucketOpening

    a Publisher to subscribe to for creating new receiving bucket signals.

    closeSelector

    a factory that, given an opening signal, returns a Publisher to subscribe to for closing the corresponding bucket

    bufferSupplier

    the collection to use for each data segment

    returns

    a microbatched Flux of Seq delimited by an opening Publisher and a relative closing Publisher

    Definition Classes
    Flux
  29. final def bufferWhen[U, V](bucketOpening: Publisher[U], closeSelector: (U) ⇒ Publisher[V]): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq delimited by the given Publisher signals. Each Seq bucket will last until the mapped Publisher receiving the boundary signal emits, thus releasing the bucket to the returned Flux.

    When Open signal is strictly not overlapping Close signal : dropping buffers

    When Open signal is strictly more frequent than Close signal : overlapping buffers

    When Open signal is exactly coordinated with Close signal : exact buffers

    U

    the element type of the bucket-opening sequence

    V

    the element type of the bucket-closing sequence

    bucketOpening

    a Publisher to subscribe to for creating new receiving bucket signals.

    closeSelector

    a factory that, given an opening signal, returns a Publisher to subscribe to for closing the corresponding bucket

    returns

    a microbatched Flux of Seq delimited by an opening Publisher and a relative closing Publisher

    Definition Classes
    Flux
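
    Open/close-driven buffering can be sketched as follows (hypothetical timings; assumes Flux.interval and Mono.delay factories as exposed by reactor-core-scala):

    ```scala
    import scala.concurrent.duration._
    import reactor.core.scala.publisher.{Flux, Mono}

    // A new bucket opens every 500 ms; each bucket closes 200 ms after it opens.
    // Open is less frequent than close, so the buffers are "dropping": values
    // arriving between a close and the next open are not buffered at all.
    val dropping = Flux.interval(50.milliseconds)
      .bufferWhen(Flux.interval(500.milliseconds), (_: Long) => Mono.delay(200.milliseconds))
    ```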
  30. final def bufferWhile(predicate: (OUT) ⇒ Boolean): Flux[Seq[OUT]]

    Collect incoming values into multiple Seq that will be pushed into the returned Flux. Each buffer continues aggregating values while the given predicate returns true, and a new buffer is created as soon as the predicate returns false. Note that the element that triggers the predicate to return false (and thus closes a buffer) is NOT included in any emitted buffer.

    On completion, if the latest buffer is non-empty and has not been closed it is emitted. However, such a "partial" buffer isn't emitted in case of onError termination.

    predicate

    a predicate that triggers the next buffer when it becomes false.

    returns

    a microbatched Flux of Seq

    Definition Classes
    Flux
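
    The differing inclusion rules of bufferUntil and bufferWhile can be sketched as follows (assumes Flux.just):

    ```scala
    import reactor.core.scala.publisher.Flux

    val src = Flux.just(1, 2, 3, 1, 2, 3)

    // bufferUntil: the triggering element closes the buffer and is included in it
    src.bufferUntil(_ == 3) // Seq(1,2,3), Seq(1,2,3)

    // bufferWhile: the triggering element is dropped entirely
    src.bufferWhile(_ != 3) // Seq(1,2), Seq(1,2)
    ```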
  31. final def cache(history: Int, ttl: Duration): Flux[OUT]

    Turn this Flux into a hot source and cache last emitted signals for further Subscriber. Will retain up to the given history size with per-item expiry timeout.

    history

    number of events retained in history excluding complete and error

    ttl

    Time-to-live for each cached item.

    returns

    a replaying Flux

    Definition Classes
    Flux
  32. final def cache(ttl: Duration): Flux[OUT]

    Turn this Flux into a hot source and cache last emitted signals for further Subscriber. Will retain an unbounded history with per-item expiry timeout. Completion and Error will also be replayed.

    ttl

    Time-to-live for each cached item.

    returns

    a replaying Flux

    Definition Classes
    Flux
  33. final def cache(history: Int): Flux[OUT]

    Turn this Flux into a hot source and cache last emitted signals for further Subscriber. Will retain up to the given history size onNext signals. Completion and Error will also be replayed.

    history

    number of events retained in history excluding complete and error

    returns

    a replaying Flux

    Definition Classes
    Flux
  34. final def cache(): Flux[OUT]

    Turn this Flux into a hot source and cache last emitted signals for further Subscriber. Will retain an unbounded volume of onNext signals. Completion and Error will also be replayed.

    returns

    a replaying Flux

    Definition Classes
    Flux
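
    History-bounded replay can be sketched as follows (assumes Flux.just):

    ```scala
    import reactor.core.scala.publisher.Flux

    val cached = Flux.just(1, 2, 3).cache(2) // hot source, replays at most the last 2 values

    cached.subscribe(i => println(s"first: $i")) // first: 1, 2, 3
    cached.subscribe(i => println(s"late: $i"))  // late: 2, 3 -- only the retained history is replayed
    ```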
  35. final def cancelOn(scheduler: Scheduler): Flux[OUT]

    Prepare this Flux so that subscribers will cancel from it on a specified Scheduler.

    scheduler

    the Scheduler to signal cancel on

    returns

    a scheduled cancel Flux

    Definition Classes
    Flux
  36. final def cast[E](clazz: Class[E]): Flux[E]

    Cast the current Flux produced type into a target produced type.

    E

    the Flux output type

    clazz

    the target class to cast to

    returns

    a cast Flux

    Definition Classes
    Flux
  37. final def checkpoint(description: String): Flux[OUT]

    Activate assembly tracing for this particular Flux and give it a description that will be reflected in the assembly traceback in case of an error upstream of the checkpoint.

    It should be placed towards the end of the reactive chain, as errors triggered downstream of it cannot be observed and augmented with assembly trace.

    The description could for example be a meaningful name for the assembled flux or a wider correlation ID.

    description

    a description to include in the assembly traceback.

    returns

    the assembly tracing Flux.

    Definition Classes
    Flux
  38. final def checkpoint(): Flux[OUT]

    Activate assembly tracing for this particular Flux, in case of an error upstream of the checkpoint.

    It should be placed towards the end of the reactive chain, as errors triggered downstream of it cannot be observed and augmented with assembly trace.

    returns

    the assembly tracing Flux.

    Definition Classes
    Flux
  39. def clone(): AnyRef
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @native() @throws( ... )
  40. final def collect[E](containerSupplier: () ⇒ E, collector: (E, OUT) ⇒ Unit): Mono[E]

    Collect the Flux sequence with the given collector and supplied container on subscribe.

    Collect the Flux sequence with the given collector and supplied container on subscribe. The collected result will be emitted when this sequence completes.

    E

    the Flux collected container type

    containerSupplier

    the supplier of the container instance for each Subscriber

    collector

    the consumer of both the container instance and the current value

    returns

    a Mono sequence of the collected value on complete

    Definition Classes
    Flux
  41. final def collectMap[K, V](keyExtractor: (OUT) ⇒ K, valueExtractor: (OUT) ⇒ V, mapSupplier: () ⇒ Map[K, V]): Mono[Map[K, V]]

    Convert all this Flux sequence into a supplied map where the key is extracted by the given function and the value will be the most recent extracted item for this key.

    Convert all this Flux sequence into a supplied map where the key is extracted by the given function and the value will be the most recent extracted item for this key.

    K

    the key extracted from each value of this Flux instance

    V

    the value extracted from each value of this Flux instance

    keyExtractor

    a Function1 to route items into a keyed Traversable

    valueExtractor

    a Function1 to select the data to store from each item

    mapSupplier

    a mutable.Map factory called for each Subscriber

    returns

    a Mono of all last matched key-values from this Flux

    Definition Classes
    Flux
  42. final def collectMap[K, V](keyExtractor: (OUT) ⇒ K, valueExtractor: (OUT) ⇒ V): Mono[Map[K, V]]

    Convert all this Flux sequence into a hashed map where the key is extracted by the given function and the value will be the most recent extracted item for this key.

    Convert all this Flux sequence into a hashed map where the key is extracted by the given function and the value will be the most recent extracted item for this key.

    K

    the key extracted from each value of this Flux instance

    V

    the value extracted from each value of this Flux instance

    keyExtractor

    a Function1 to route items into a keyed Traversable

    valueExtractor

    a Function1 to select the data to store from each item

    returns

    a Mono of all last matched key-values from this Flux

    Definition Classes
    Flux
  43. final def collectMap[K](keyExtractor: (OUT) ⇒ K): Mono[Map[K, OUT]]

    Convert all this Flux sequence into a hashed map where the key is extracted by the given Function1 and the value will be the most recent emitted item for this key.

    Convert all this Flux sequence into a hashed map where the key is extracted by the given Function1 and the value will be the most recent emitted item for this key.

    K

    the key extracted from each value of this Flux instance

    keyExtractor

    a Function1 to route items into a keyed Traversable

    returns

    a Mono of all last matched key-values from this Flux

    Definition Classes
    Flux
  44. final def collectMultimap[K, V](keyExtractor: (OUT) ⇒ K, valueExtractor: (OUT) ⇒ V, mapSupplier: () ⇒ Map[K, Collection[V]]): Mono[Map[K, Traversable[V]]]

    Convert this Flux sequence into a supplied map where the key is extracted by the given function and the value will be all the extracted items for this key.

    Convert this Flux sequence into a supplied map where the key is extracted by the given function and the value will be all the extracted items for this key.

    K

    the key extracted from each value of this Flux instance

    V

    the value extracted from each value of this Flux instance

    keyExtractor

    a Function1 to route items into a keyed Traversable

    valueExtractor

    a Function1 to select the data to store from each item

    mapSupplier

    a Map factory called for each Subscriber

    returns

    a Mono of all matched key-values from this Flux

    Definition Classes
    Flux
  45. final def collectMultimap[K, V](keyExtractor: (OUT) ⇒ K, valueExtractor: (OUT) ⇒ V): Mono[Map[K, Traversable[V]]]

    Convert this Flux sequence into a hashed map where the key is extracted by the given function and the value will be all the extracted items for this key.

    Convert this Flux sequence into a hashed map where the key is extracted by the given function and the value will be all the extracted items for this key.

    K

    the key extracted from each value of this Flux instance

    V

    the value extracted from each value of this Flux instance

    keyExtractor

    a Function1 to route items into a keyed Traversable

    valueExtractor

    a Function1 to select the data to store from each item

    returns

    a Mono of all matched key-values from this Flux

    Definition Classes
    Flux
  46. final def collectMultimap[K](keyExtractor: (OUT) ⇒ K): Mono[Map[K, Traversable[OUT]]]

    Convert this Flux sequence into a hashed map where the key is extracted by the given function and the value will be all the emitted item for this key.

    Convert this Flux sequence into a hashed map where the key is extracted by the given function and the value will be all the emitted item for this key.

    K

    the key extracted from each value of this Flux instance

    keyExtractor

    a Function1 to route items into a keyed Traversable

    returns

    a Mono of all matched key-values from this Flux

    Definition Classes
    Flux
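
    The last-value-wins behaviour of collectMap versus the grouping behaviour of collectMultimap can be sketched as follows (assumes Flux.just):

    ```scala
    import reactor.core.scala.publisher.Flux

    val words = Flux.just("apple", "avocado", "banana")

    // collectMap keeps only the most recent value per key:
    words.collectMap(_.head)      // emits Map('a' -> "avocado", 'b' -> "banana")

    // collectMultimap keeps every value per key:
    words.collectMultimap(_.head) // emits 'a' -> both "apple" and "avocado"; 'b' -> "banana"
    ```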
  47. final def collectSeq(): Mono[Seq[OUT]]

    Accumulate this Flux sequence in a Seq that is emitted to the returned Mono on onComplete.

    Accumulate this Flux sequence in a Seq that is emitted to the returned Mono on onComplete.

    returns

    a Mono of all values from this Flux

    Definition Classes
    Flux
  48. final def collectSortedSeq(ordering: Ordering[OUT]): Mono[Seq[OUT]]

    Accumulate and sort using the given comparator this Flux sequence in a Seq that is emitted to the returned Mono on onComplete.

    Accumulate and sort using the given comparator this Flux sequence in a Seq that is emitted to the returned Mono on onComplete.

    ordering

    an Ordering to sort the items of this sequence

    returns

    a Mono of all sorted values from this Flux

    Definition Classes
    Flux
  49. final def collectSortedSeq(): Mono[Seq[OUT]]

    Accumulate and sort this Flux sequence in a Seq that is emitted to the returned Mono on onComplete.

    Accumulate and sort this Flux sequence in a Seq that is emitted to the returned Mono on onComplete.

    returns

    a Mono of all sorted values from this Flux

    Definition Classes
    Flux
  50. final def compose[V](transformer: (Flux[OUT]) ⇒ Publisher[V]): Flux[V]

    Defer the transformation of this Flux in order to generate a target Flux for each new Subscriber.

    Defer the transformation of this Flux in order to generate a target Flux for each new Subscriber.

    flux.compose(Mono::from).subscribe()

    V

    the item type in the returned Publisher

    transformer

    the Function1 to map this Flux into a target Publisher instance for each new subscriber

    returns

    a new Flux

    Definition Classes
    Flux
    See also

    Flux.transform for immediate transformation of Flux

    Flux.as for a loose conversion to an arbitrary type

  51. final def concatMap[V](mapper: (OUT) ⇒ Publisher[_ <: V], prefetch: Int): Flux[V]

    Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave).

    Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave). Errors will immediately short circuit current concat backlog.

    V

    the produced concatenated type

    mapper

    the function to transform this sequence of T into concatenated sequences of V

    prefetch

    the inner source produced demand

    returns

    a concatenated Flux

    Definition Classes
    Flux
  52. final def concatMap[V](mapper: (OUT) ⇒ Publisher[_ <: V]): Flux[V]

    Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave). Errors will immediately short circuit current concat backlog.

    V

    the produced concatenated type

    mapper

    the function to transform this sequence of T into concatenated sequences of V

    returns

    a concatenated Flux

    Definition Classes
    Flux
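
    The ordering guarantee that distinguishes concatMap from flatMap can be sketched as follows (assumes Flux.just and a delayElements operator as in reactor-core):

    ```scala
    import scala.concurrent.duration._
    import reactor.core.scala.publisher.Flux

    // flatMap subscribes to all inner publishers eagerly, so the fastest may emit first;
    // concatMap subscribes to each inner publisher only after the previous one completes.
    Flux.just(300, 100, 200)
      .concatMap(d => Flux.just(d).delayElements(d.milliseconds))
    // emits 300, 100, 200 -- source order preserved despite the differing delays
    ```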
  53. final def concatMapDelayError[V](mapper: (OUT) ⇒ Publisher[_ <: V], delayUntilEnd: Boolean, prefetch: Int): Flux[V]

    Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave).

    Errors will be delayed after the current concat backlog if delayUntilEnd is false or after all sources if delayUntilEnd is true.

    V

    the produced concatenated type

    mapper

    the function to transform this sequence of T into concatenated sequences of V

    delayUntilEnd

    delay error until all sources have been consumed instead of after the current source

    prefetch

    the inner source produced demand

    returns

    a concatenated Flux

    Definition Classes
    Flux
  54. final def concatMapDelayError[V](mapper: (OUT) ⇒ Publisher[_ <: V], prefetch: Int): Flux[V]

    Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave).

    Errors will be delayed until all concatenated sources terminate.

    V

    the produced concatenated type

    mapper

    the function to transform this sequence of T into concatenated sequences of V

    prefetch

    the inner source produced demand

    returns

    a concatenated Flux

    Definition Classes
    Flux
  55. final def concatMapDelayError[V](mapper: (OUT) ⇒ Publisher[_ <: V]): Flux[V]

    Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave).

    Errors will be delayed after the current concat backlog.

    V

    the produced concatenated type

    mapper

    the function to transform this sequence of T into concatenated sequences of V

    returns

    a concatenated Flux

    Definition Classes
    Flux
  56. final def concatMapIterable[R](mapper: (OUT) ⇒ Iterable[_ <: R], prefetch: Int): Flux[R]

    Bind Iterable sequences given this input sequence like Flux.flatMapIterable, but preserve ordering and concatenate emissions instead of merging (no interleave).

    Errors will be delayed until the current concat backlog has been processed.

    R

    the produced concatenated type

    mapper

    the function to transform this sequence of T into concatenated sequences of R

    prefetch

    the prefetch amount, i.e. the demand produced toward the inner sources

    returns

    a concatenated Flux

    Definition Classes
    Flux
  57. final def concatMapIterable[R](mapper: (OUT) ⇒ Iterable[_ <: R]): Flux[R]

    Bind Iterable sequences given this input sequence like Flux.flatMapIterable, but preserve ordering and concatenate emissions instead of merging (no interleave).

    Bind Iterable sequences given this input sequence like Flux.flatMapIterable, but preserve ordering and concatenate emissions instead of merging (no interleave).

    Errors will be delayed until the current concat backlog has been processed.

    R

    the produced concatenated type

    mapper

    the function to transform this sequence of T into concatenated sequences of R

    returns

    a concatenated Flux

    Definition Classes
    Flux
  58. final def concatWith(other: Publisher[_ <: OUT]): Flux[OUT]

    Concatenate emissions of this Flux with the provided Publisher (no interleave).

    Concatenate emissions of this Flux with the provided Publisher (no interleave).

    other

    the Publisher sequence to concatenate after this Flux

    returns

    a concatenated Flux

    Definition Classes
    Flux
  59. def count(): Mono[Long]

    Counts the number of values in this Flux.

    Counts the number of values in this Flux. The count will be emitted when onComplete is observed.

    returns

    a new Mono of Long count

    Definition Classes
    Flux
  60. final def defaultIfEmpty(defaultV: OUT): Flux[OUT]

    Provide a default unique value if this sequence is completed without any data.

    Provide a default unique value if this sequence is completed without any data.

    defaultV

    the alternate value if this sequence is empty

    returns

    a new Flux

    Definition Classes
    Flux
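    A minimal sketch, assuming reactor-scala-extensions' Flux companion (import path and Flux.empty signature are assumptions):

    ```scala
    import reactor.core.scala.publisher.Flux

    // An empty source falls back to the provided value; a non-empty one is untouched.
    Flux.empty[String].defaultIfEmpty("fallback").subscribe(println(_)) // fallback
    Flux.just("a").defaultIfEmpty("fallback").subscribe(println(_))     // a
    ```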
  61. final def delayElements(delay: Duration, timer: Scheduler): Flux[OUT]

    Delay each of this Flux elements (Subscriber.onNext signals) by a given duration, on a given Scheduler.

    Delay each of this Flux elements (Subscriber.onNext signals) by a given duration, on a given Scheduler.

    delay

    duration to delay each Subscriber.onNext signal

    timer

    the Scheduler to use for delaying each signal

    returns

    a delayed Flux

    Definition Classes
    Flux
    See also

    #delaySubscription(Duration) delaySubscription to introduce a delay at the beginning of the sequence only

  62. final def delayElements(delay: Duration): Flux[OUT]

    Delay each of this Flux elements (Subscriber.onNext signals) by a given duration.

    Delay each of this Flux elements (Subscriber.onNext signals) by a given duration.

    delay

    duration to delay each Subscriber.onNext signal

    returns

    a delayed Flux

    Definition Classes
    Flux
    See also

    #delaySubscription(Duration) delaySubscription to introduce a delay at the beginning of the sequence only
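    A sketch of per-element delay, assuming reactor-scala-extensions on the classpath (it accepts scala.concurrent.duration.Duration, per the signature above):

    ```scala
    import reactor.core.scala.publisher.Flux
    import scala.concurrent.duration._

    // Each onNext is pushed back by one second relative to the previous signal,
    // so the three elements arrive at roughly 1s, 2s and 3s.
    Flux.just("a", "b", "c")
      .delayElements(1.second)
      .subscribe(v => println(s"$v at ${System.currentTimeMillis()}"))
    Thread.sleep(3500) // keep the main thread alive for the delayed signals
    ```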

  63. final def delaySequence(delay: Duration, timer: Scheduler): Flux[OUT]

    Shift this Flux forward in time by a given Duration.

    Shift this Flux forward in time by a given Duration. Unlike with Flux.delayElements(Duration), elements are shifted forward in time as they are emitted, always resulting in the delay between two elements being the same as in the source (only the first element is visibly delayed from the previous event, that is the subscription). Signals are delayed and continue on a user-specified Scheduler, but empty sequences or immediate error signals are not delayed.

    With this operator, a source emitting at 10Hz with a delaySequence Duration of 1s will still emit at 10Hz, with an initial "hiccup" of 1s. On the other hand, Flux.delayElements(Duration) would end up emitting at 1Hz.

    This is closer to Flux.delaySubscription(Duration), except the source is subscribed to immediately.

    delay

    Duration to shift the sequence by

    timer

    a time-capable Scheduler instance to delay signals on

    returns

    a shifted Flux emitting at the same frequency as the source

    Definition Classes
    Flux
  64. final def delaySequence(delay: Duration): Flux[OUT]

    Shift this Flux forward in time by a given Duration.

    Shift this Flux forward in time by a given Duration. Unlike with Flux.delayElements(Duration), elements are shifted forward in time as they are emitted, always resulting in the delay between two elements being the same as in the source (only the first element is visibly delayed from the previous event, that is the subscription). Signals are delayed and continue on the parallel Scheduler, but empty sequences or immediate error signals are not delayed.

    With this operator, a source emitting at 10Hz with a delaySequence Duration of 1s will still emit at 10Hz, with an initial "hiccup" of 1s. On the other hand, Flux.delayElements(Duration) would end up emitting at 1Hz.

    This is closer to Flux.delaySubscription(Duration), except the source is subscribed to immediately.

    delay

    Duration to shift the sequence by

    returns

    a shifted Flux emitting at the same frequency as the source

    Definition Classes
    Flux
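    The contrast with delayElements can be sketched as follows (assuming reactor-scala-extensions; Flux.interval and take are assumed to exist with these signatures):

    ```scala
    import reactor.core.scala.publisher.Flux
    import scala.concurrent.duration._

    // A source ticking every 100ms, shifted as a whole by 1s: the first element
    // arrives after ~1.1s, but the 100ms spacing between elements is preserved.
    // delayElements(1.second) would instead space the elements 1s apart.
    Flux.interval(100.millis)
      .take(5)
      .delaySequence(1.second)
      .subscribe(i => println(s"tick $i"))
    Thread.sleep(2000)
    ```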
  65. final def delaySubscription[U](subscriptionDelay: Publisher[U]): Flux[OUT]

    Delay the subscription to the main source until another Publisher signals a value or completes.

    Delay the subscription to the main source until another Publisher signals a value or completes.

    U

    the other source type

    subscriptionDelay

    a Publisher whose first next or complete signal triggers the subscription to this Flux

    returns

    a delayed Flux

    Definition Classes
    Flux
  66. final def delaySubscription(delay: Duration, timer: Scheduler): Flux[OUT]

    Delay the subscription to this Flux source until the given period elapses.

    Delay the subscription to this Flux source until the given period elapses.

    delay

    duration before subscribing this Flux

    timer

    a time-capable Scheduler instance to run on

    returns

    a delayed Flux

    Definition Classes
    Flux
  67. final def delaySubscription(delay: Duration): Flux[OUT]

    Delay the subscription to this Flux source until the given period elapses.

    Delay the subscription to this Flux source until the given period elapses.

    delay

    duration before subscribing this Flux

    returns

    a delayed Flux

    Definition Classes
    Flux
  68. final def dematerialize[X](): Flux[X]

    A "phantom-operator" that works only if this Flux emits onNext, onError or onComplete reactor.core.publisher.Signal instances.

    A "phantom-operator" that works only if this Flux emits onNext, onError or onComplete reactor.core.publisher.Signal instances. The corresponding Subscriber callback will be invoked: an error Signal will trigger onError and a complete Signal will trigger onComplete.

    X

    the dematerialized type

    returns

    a dematerialized Flux

    Definition Classes
    Flux
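    A round-trip sketch with the materialize operator mentioned in the doOnEach entry below (assuming reactor-scala-extensions on the classpath):

    ```scala
    import reactor.core.scala.publisher.Flux

    // materialize wraps values and terminal events into Signal objects;
    // dematerialize unwraps them back into regular onNext/onComplete signals.
    Flux.just(1, 2)
      .materialize()
      .dematerialize[Int]()
      .subscribe(println(_)) // 1, 2
    ```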
  69. def dispose(): Unit
    Definition Classes
    FluxProcessor → Disposable
  70. final def distinct[V](keySelector: (OUT) ⇒ V): Flux[OUT]

    For each Subscriber, tracks the values of this Flux that have been seen and filters out duplicates, as identified by the extracted key.

    For each Subscriber, tracks the values of this Flux that have been seen and filters out duplicates, as identified by the extracted key.

    V

    the type of the key extracted from each value in this sequence

    keySelector

    function to compute comparison key for each element

    returns

    a filtering Flux with values having distinct keys

    Definition Classes
    Flux
  71. final def distinct(): Flux[OUT]

    For each Subscriber, tracks the values of this Flux that have been seen and filters out duplicates.

    For each Subscriber, tracks the values of this Flux that have been seen and filters out duplicates.

    returns

    a filtering Flux with unique values

    Definition Classes
    Flux
  72. final def distinctUntilChanged[V](keySelector: (OUT) ⇒ V, keyComparator: (V, V) ⇒ Boolean): Flux[OUT]

    Filter out subsequent repetitions of an element (that is, if they arrive right after one another), as compared by a key extracted through the user provided Function1 and then comparing keys with the supplied Function2.

    Filter out subsequent repetitions of an element (that is, if they arrive right after one another), as compared by a key extracted through the user provided Function1 and then comparing keys with the supplied Function2.

    V

    the type of the key extracted from each value in this sequence

    keySelector

    function to compute comparison key for each element

    keyComparator

    predicate used to compare keys.

    returns

    a filtering Flux with only one occurrence in a row of each element of the same key for which the predicate returns true (yet element keys can repeat in the overall sequence)

    Definition Classes
    Flux
  73. final def distinctUntilChanged[V](keySelector: (OUT) ⇒ V): Flux[OUT]

    Filters out subsequent and repeated elements provided a matching extracted key.

    Filters out subsequent and repeated elements provided a matching extracted key.

    V

    the type of the key extracted from each value in this sequence

    keySelector

    function to compute comparison key for each element

    returns

    a filtering Flux with conflated repeated elements given a comparison key

    Definition Classes
    Flux
  74. final def distinctUntilChanged(): Flux[OUT]

    Filters out subsequent and repeated elements.

    Filters out subsequent and repeated elements.

    returns

    a filtering Flux with conflated repeated elements

    Definition Classes
    Flux
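    The difference from distinct() can be sketched as follows (assuming reactor-scala-extensions on the classpath):

    ```scala
    import reactor.core.scala.publisher.Flux

    // Only consecutive repetitions are dropped; a value may reappear later.
    Flux.just(1, 1, 2, 2, 2, 1, 3)
      .distinctUntilChanged()
      .subscribe(println(_)) // 1, 2, 1, 3

    // distinct() by contrast tracks everything seen and would emit 1, 2, 3.
    ```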
  75. final def doAfterTerminate(afterTerminate: () ⇒ Unit): Flux[OUT]

    Triggered after the Flux terminates, either by completing downstream successfully or with an error.

    Triggered after the Flux terminates, either by completing downstream successfully or with an error.

    afterTerminate

    the callback to call after Subscriber.onComplete or Subscriber.onError

    returns

    an observed Flux

    Definition Classes
    Flux
  76. final def doFinally(onFinally: (SignalType) ⇒ Unit): Flux[OUT]
    Definition Classes
    Flux
  77. final def doOnCancel(onCancel: () ⇒ Unit): Flux[OUT]

    Triggered when the Flux is cancelled.

    Triggered when the Flux is cancelled.

    onCancel

    the callback to call on Subscription.cancel

    returns

    an observed Flux

    Definition Classes
    Flux
  78. final def doOnComplete(onComplete: () ⇒ Unit): Flux[OUT]

    Triggered when the Flux completes successfully.

    Triggered when the Flux completes successfully.

    onComplete

    the callback to call on Subscriber#onComplete

    returns

    an observed Flux

    Definition Classes
    Flux
  79. final def doOnEach(signalConsumer: (Signal[OUT]) ⇒ Unit): Flux[OUT]

    Triggers side-effects when the Flux emits an item, fails with an error or completes successfully.

    Triggers side-effects when the Flux emits an item, fails with an error or completes successfully. All these events are represented as a Signal that is passed to the side-effect callback. Note that this is an advanced operator, typically used for monitoring of a Flux.

    signalConsumer

    the mandatory callback to call on Subscriber.onNext, Subscriber.onError and Subscriber#onComplete

    returns

    an observed Flux

    Definition Classes
    Flux
    See also

    Flux.doOnNext

    Flux.doOnError

    Flux.doOnComplete

    Flux.materialize

    Signal

  80. final def doOnError(predicate: (Throwable) ⇒ Boolean, onError: (Throwable) ⇒ Unit): Flux[OUT]

    Triggered when the Flux completes with an error matching the given exception.

    Triggered when the Flux completes with an error matching the given exception.

    predicate

    the matcher for exceptions to handle

    onError

    the error handler for each error

    returns

    an observed Flux

    Definition Classes
    Flux
  81. final def doOnError[E <: Throwable](exceptionType: Class[E], onError: (E) ⇒ Unit): Flux[OUT]

    Triggered when the Flux completes with an error matching the given exception type.

    Triggered when the Flux completes with an error matching the given exception type.

    E

    type of the error to handle

    exceptionType

    the type of exceptions to handle

    onError

    the error handler for each error

    returns

    an observed Flux

    Definition Classes
    Flux
  82. final def doOnError(onError: (Throwable) ⇒ Unit): Flux[OUT]

    Triggered when the Flux completes with an error.

    Triggered when the Flux completes with an error.

    onError

    the callback to call on Subscriber.onError

    returns

    an observed Flux

    Definition Classes
    Flux
  83. final def doOnNext(onNext: (OUT) ⇒ Unit): Flux[OUT]

    Triggered when the Flux emits an item.

    Triggered when the Flux emits an item.

    onNext

    the callback to call on Subscriber.onNext

    returns

    an observed Flux

    Definition Classes
    Flux
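    The side-effect hooks compose naturally; a minimal sketch (assuming reactor-scala-extensions on the classpath):

    ```scala
    import reactor.core.scala.publisher.Flux

    // Side-effect callbacks observe the sequence without altering it.
    Flux.just("a", "b")
      .doOnSubscribe(_ => println("subscribed"))
      .doOnNext(v => println(s"next: $v"))
      .doOnError(e => println(s"failed: $e"))
      .doOnComplete(() => println("done"))
      .subscribe()
    ```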
  84. final def doOnRequest(consumer: (Long) ⇒ Unit): Flux[OUT]

    Attach a Long consumer to this Flux that will observe any request made to this Flux.

    Attach a Long consumer to this Flux that will observe any request made to this Flux.

    consumer

    the consumer to invoke on each request

    returns

    an observed Flux

    Definition Classes
    Flux
  85. final def doOnSubscribe(onSubscribe: (Subscription) ⇒ Unit): Flux[OUT]

    Triggered when the Flux is subscribed.

    Triggered when the Flux is subscribed.

    onSubscribe

    the callback to call on org.reactivestreams.Subscriber.onSubscribe

    returns

    an observed Flux

    Definition Classes
    Flux
  86. final def doOnTerminate(onTerminate: () ⇒ Unit): Flux[OUT]

    Triggered when the Flux terminates, either by completing successfully or with an error.

    Triggered when the Flux terminates, either by completing successfully or with an error.

    onTerminate

    the callback to call on Subscriber.onComplete or Subscriber.onError

    returns

    an observed Flux

    Definition Classes
    Flux
  87. def downstreamCount: Long

    Return the number of active Subscribers, or -1 if untracked.

    Return the number of active Subscribers, or -1 if untracked.

    returns

    the number of active Subscribers, or -1 if untracked

  88. final def elapsed(scheduler: Scheduler): Flux[(Long, OUT)]

    Map this Flux sequence into Tuple2 of a Long elapsed time in milliseconds (T1) and the associated data of type T (T2).

    Map this Flux sequence into Tuple2 of a Long elapsed time in milliseconds (T1) and the associated data of type T (T2). The elapsed time corresponds to the duration between the subscription and the first next signal, or between two subsequent next signals.

    scheduler

    a Scheduler instance to read time from

    returns

    a transforming Flux that emits tuples of time elapsed in milliseconds and matching data

    Definition Classes
    Flux
  89. final def elapsed(): Flux[(Long, OUT)]

    Map this Flux sequence into Tuple2 of a Long elapsed time in milliseconds (T1) and the associated data of type T (T2).

    Map this Flux sequence into Tuple2 of a Long elapsed time in milliseconds (T1) and the associated data of type T (T2). The elapsed time corresponds to the duration between the subscription and the first next signal, or between two subsequent next signals.

    returns

    a transforming Flux that emits tuples of time elapsed in milliseconds and matching data

    Definition Classes
    Flux
  90. final def elementAt(index: Int, defaultValue: OUT): Mono[OUT]

    Emit only the element at the given index position, or fall back to the given default value if the sequence is shorter.

    Emit only the element at the given index position, or fall back to the given default value if the sequence is shorter.

    index

    index of an item

    defaultValue

    supply a default value if not found

    returns

    a Mono of the item at a specified index or a default value

    Definition Classes
    Flux
  91. final def elementAt(index: Int): Mono[OUT]

    Emit only the element at the given index position or IndexOutOfBoundsException if the sequence is shorter.

    Emit only the element at the given index position or IndexOutOfBoundsException if the sequence is shorter.

    index

    index of an item

    returns

    a Mono of the item at a specified index

    Definition Classes
    Flux
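    The two overloads side by side, as a sketch (assuming reactor-scala-extensions on the classpath):

    ```scala
    import reactor.core.scala.publisher.Flux

    val letters = Flux.just("a", "b", "c")
    letters.elementAt(1).subscribe(println(_))            // b (index is 0-based)
    letters.elementAt(5, "default").subscribe(println(_)) // default: index out of range
    // Without a default, elementAt(5) would signal IndexOutOfBoundsException.
    ```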
  92. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  93. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  94. def error: Option[Throwable]

    Current error if any, default to None

    Current error if any, default to None

    returns

    Current error if any, default to None

  95. final def expand(expander: (OUT) ⇒ Publisher[_ <: OUT]): Flux[OUT]

    Recursively expand elements into a graph and emit all the resulting elements using a breadth-first traversal strategy.

    Recursively expand elements into a graph and emit all the resulting elements using a breadth-first traversal strategy.

    That is: emit the values from this Flux first, then expand each at a first level of recursion and emit all of the resulting values, then expand all of these at a second level and so on.

    For example, given the hierarchical structure

    A
      - AA
        - aa1
    B
      - BB
        - bb1
    

    Expands Flux.just(A, B) into

    A
    B
    AA
    BB
    aa1
    bb1
    

    expander

    the Function1 applied at each level of recursion to expand values into a Publisher, producing a graph.

    returns

    a breadth-first expanded Flux

    Definition Classes
    Flux
  96. final def expand(expander: (OUT) ⇒ Publisher[_ <: OUT], capacityHint: Int): Flux[OUT]

    Recursively expand elements into a graph and emit all the resulting elements using a breadth-first traversal strategy.

    Recursively expand elements into a graph and emit all the resulting elements using a breadth-first traversal strategy.

    That is: emit the values from this Flux first, then expand each at a first level of recursion and emit all of the resulting values, then expand all of these at a second level and so on.

    For example, given the hierarchical structure

    A
      - AA
        - aa1
    B
      - BB
        - bb1
    

    Expands Flux.just(A, B) into

    A
    B
    AA
    BB
    aa1
    bb1
    

    expander

    the Function1 applied at each level of recursion to expand values into a Publisher, producing a graph.

    capacityHint

    a capacity hint to prepare the inner queues to accommodate n elements per level of recursion.

    returns

    a breadth-first expanded Flux

    Definition Classes
    Flux
  97. final def expandDeep(expander: (OUT) ⇒ Publisher[_ <: OUT]): Flux[OUT]

    Recursively expand elements into a graph and emit all the resulting elements, in a depth-first traversal order.

    Recursively expand elements into a graph and emit all the resulting elements, in a depth-first traversal order.

    That is: emit one value from this Flux, expand it and emit the first value at this first level of recursion, and so on... When no more recursion is possible, backtrack to the previous level and re-apply the strategy.

    For example, given the hierarchical structure

    A
      - AA
        - aa1
    B
      - BB
        - bb1
    

    Expands Flux.just(A, B) into

    A
    AA
    aa1
    B
    BB
    bb1
    

    expander

    the Function1 applied at each level of recursion to expand values into a Publisher, producing a graph.

    returns

    a Flux expanded depth-first

    Definition Classes
    Flux
  98. final def expandDeep(expander: (OUT) ⇒ Publisher[_ <: OUT], capacityHint: Int): Flux[OUT]

    Recursively expand elements into a graph and emit all the resulting elements, in a depth-first traversal order.

    Recursively expand elements into a graph and emit all the resulting elements, in a depth-first traversal order.

    That is: emit one value from this Flux, expand it and emit the first value at this first level of recursion, and so on... When no more recursion is possible, backtrack to the previous level and re-apply the strategy.

    For example, given the hierarchical structure

    A
      - AA
        - aa1
    B
      - BB
        - bb1
    

    Expands Flux.just(A, B) into

    A
    AA
    aa1
    B
    BB
    bb1
    

    expander

    the Function1 applied at each level of recursion to expand values into a Publisher, producing a graph.

    capacityHint

    a capacity hint to prepare the inner queues to accommodate n elements per level of recursion.

    returns

    a Flux expanded depth-first

    Definition Classes
    Flux
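    Both traversal orders over the hierarchical structure shown above can be sketched with a hypothetical child-lookup map (assuming reactor-scala-extensions' Flux.fromIterable exists with this signature):

    ```scala
    import reactor.core.scala.publisher.Flux

    // Hypothetical child lookup over the A/AA/aa1, B/BB/bb1 tree.
    val children = Map(
      "A" -> Seq("AA"), "AA" -> Seq("aa1"),
      "B" -> Seq("BB"), "BB" -> Seq("bb1")
    ).withDefaultValue(Seq.empty)

    val roots = Flux.just("A", "B")
    // Breadth-first: whole levels at a time -> A, B, AA, BB, aa1, bb1
    val byLevel  = roots.expand(n => Flux.fromIterable(children(n)))
    // Depth-first: whole branches at a time -> A, AA, aa1, B, BB, bb1
    val byBranch = roots.expandDeep(n => Flux.fromIterable(children(n)))
    ```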
  99. final def filter(p: (OUT) ⇒ Boolean): Flux[OUT]

    Evaluate each accepted value against the given predicate T => Boolean.

    Evaluate each accepted value against the given predicate T => Boolean. If the predicate test succeeds, the value is passed into the new Flux. If the predicate test fails, the value is ignored and a request of 1 is emitted.

    p

    the Function1 predicate to test values against

    returns

    a new Flux containing only values that pass the predicate test

    Definition Classes
    FluxFilter
  100. final def filterWhen(asyncPredicate: Function1[OUT, _ <: Publisher[Boolean] with MapablePublisher[Boolean]], bufferSize: Int): Flux[OUT]

    Test each value emitted by this Flux asynchronously using a generated Publisher[Boolean] test.

    Test each value emitted by this Flux asynchronously using a generated Publisher[Boolean] test. A value is replayed if the first item emitted by its corresponding test is true. It is dropped if its test is either empty or its first emitted value is false.

    Note that only the first value of the test publisher is considered; unless it is a Mono, the test publisher will be cancelled after receiving that first value. Test publishers are generated and subscribed to in sequence.

    asyncPredicate

    the function generating a Publisher of Boolean for each value, to filter the Flux with

    bufferSize

    the maximum expected number of values to hold pending a result of their respective asynchronous predicates, rounded to the next power of two. This is capped depending on the size of the heap and the JVM limits, so be careful with large values (although e.g. 65536 should still be fine). Also serves as the initial request size for the source.

    returns

    a filtered Flux

    Definition Classes
    Flux
  101. final def filterWhen(asyncPredicate: Function1[OUT, _ <: Publisher[Boolean] with MapablePublisher[Boolean]]): Flux[OUT]

    Test each value emitted by this Flux asynchronously using a generated Publisher[Boolean] test.

    Test each value emitted by this Flux asynchronously using a generated Publisher[Boolean] test. A value is replayed if the first item emitted by its corresponding test is true. It is dropped if its test is either empty or its first emitted value is false.

    Note that only the first value of the test publisher is considered; unless it is a Mono, the test publisher will be cancelled after receiving that first value. Test publishers are generated and subscribed to in sequence.

    asyncPredicate

    the function generating a Publisher of Boolean for each value, to filter the Flux with

    returns

    a filtered Flux

    Definition Classes
    Flux
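    A sketch of asynchronous filtering, assuming reactor-scala-extensions on the classpath and that its Mono satisfies the Publisher[Boolean] with MapablePublisher[Boolean] bound (the isActive lookup is hypothetical):

    ```scala
    import reactor.core.scala.publisher.{Flux, Mono}

    // Hypothetical asynchronous predicate, e.g. backed by a remote lookup.
    def isActive(id: Long): Mono[Boolean] = Mono.just(id % 2 == 0)

    Flux.just(1L, 2L, 3L, 4L)
      .filterWhen(id => isActive(id))
      .subscribe(println(_)) // 2, 4
    ```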
  102. def finalize(): Unit
    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  103. final def flatMap[R](mapperOnNext: (OUT) ⇒ Publisher[_ <: R], mapperOnError: (Throwable) ⇒ Publisher[_ <: R], mapperOnComplete: () ⇒ Publisher[_ <: R]): Flux[R]

    Transform the signals emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave.

    Transform the signals emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave. OnError will be transformed into completion signal after its mapping callback has been applied.

    R

    the output Publisher type target

    mapperOnNext

    the Function1 to call on next data and returning a sequence to merge

    mapperOnError

    the Function to call on error signal and returning a sequence to merge

    mapperOnComplete

    the Function1 to call on complete signal and returning a sequence to merge

    returns

    a new Flux

    Definition Classes
    Flux
  104. final def flatMap[V](mapper: (OUT) ⇒ Publisher[_ <: V], concurrency: Int, prefetch: Int): Flux[V]

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave.

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave. The concurrency argument lets you control how many merged Publishers can run in parallel. The prefetch argument lets you give an arbitrary prefetch size to each merged Publisher.

    V

    the merged output sequence type

    mapper

    the Function1 to transform input sequence into N sequences Publisher

    concurrency

    the maximum in-flight elements from this Flux sequence

    prefetch

    the maximum in-flight elements from each inner Publisher sequence

    returns

    a merged Flux

    Definition Classes
    Flux
  105. final def flatMap[V](mapper: (OUT) ⇒ Publisher[_ <: V], concurrency: Int): Flux[V]

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave.

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave. The concurrency argument lets you control how many merged Publishers can run in parallel.

    V

    the merged output sequence type

    mapper

    the Function1 to transform input sequence into N sequences Publisher

    concurrency

    the maximum in-flight elements from this Flux sequence

    returns

    a new Flux

    Definition Classes
    Flux
  106. final def flatMap[R](mapper: (OUT) ⇒ Publisher[_ <: R]): Flux[R]

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave.

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave.

    R

    the merged output sequence type

    mapper

    the Function1 to transform input sequence into N sequences Publisher

    returns

    a new Flux

    Definition Classes
    Flux
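    A minimal sketch of the merge semantics (assuming reactor-scala-extensions on the classpath):

    ```scala
    import reactor.core.scala.publisher.Flux

    // Inner publishers are merged as they emit, so outputs may interleave;
    // use concatMap instead when source order must be strictly preserved.
    Flux.just(1, 2, 3)
      .flatMap(i => Flux.just(i, i * 10))
      .subscribe(println(_)) // e.g. 1, 10, 2, 20, 3, 30; async inners may interleave
    ```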
  107. final def flatMapDelayError[V](mapper: (OUT) ⇒ Publisher[_ <: V], concurrency: Int, prefetch: Int): Flux[V]

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave.

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, so that they may interleave. The concurrency argument lets you control how many merged Publishers can run in parallel. The prefetch argument lets you give an arbitrary prefetch size to each merged Publisher. This variant will delay any error until after the rest of the flatMap backlog has been processed.

    V

    the merged output sequence type

    mapper

    the Function1 to transform input sequence into N sequences Publisher

    concurrency

    the maximum in-flight elements from this Flux sequence

    prefetch

    the maximum in-flight elements from each inner Publisher sequence

    returns

    a merged Flux

    Definition Classes
    Flux
  108. final def flatMapIterable[R](mapper: (OUT) ⇒ Iterable[_ <: R], prefetch: Int): Flux[R]

    Transform the items emitted by this Flux into Iterable, then flatten the emissions from those by merging them into a single Flux.

    Transform the items emitted by this Flux into Iterable, then flatten the emissions from those by merging them into a single Flux. The prefetch argument lets you give an arbitrary prefetch size to each merged Iterable.

    R

    the merged output sequence type

    mapper

    the Function1 to transform input sequence into N sequences Iterable

    prefetch

    the maximum in-flight elements from each inner Iterable sequence

    returns

    a merged Flux

    Definition Classes
    Flux
  109. final def flatMapIterable[R](mapper: (OUT) ⇒ Iterable[_ <: R]): Flux[R]

    Transform the items emitted by this Flux into Iterable, then flatten the elements from those by merging them into a single Flux.

    Transform the items emitted by this Flux into Iterable, then flatten the elements from those by merging them into a single Flux.

    R

    the merged output sequence type

    mapper

    the Function1 to transform input sequence into N sequences Iterable

    returns

    a merged Flux

    Definition Classes
    Flux
  110. final def flatMapSequential[R](mapper: (OUT) ⇒ Publisher[_ <: R], maxConcurrency: Int, prefetch: Int): Flux[R]

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order.

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order. Unlike concatMap, transformed inner Publishers are subscribed to eagerly. Unlike flatMap, their emitted elements are merged respecting the order of the original sequence. The concurrency argument lets you control how many merged Publishers can run in parallel. The prefetch argument lets you give an arbitrary prefetch size to each merged Publisher.

    R

    the merged output sequence type

    mapper

    the Function1 to transform input sequence into N sequences Publisher

    maxConcurrency

    the maximum in-flight elements from this Flux sequence

    prefetch

    the maximum in-flight elements from each inner Publisher sequence

    returns

    a merged Flux

    Definition Classes
    Flux
  111. final def flatMapSequential[R](mapper: (OUT) ⇒ Publisher[_ <: R], maxConcurrency: Int): Flux[R]

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order.

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order. Unlike concatMap, transformed inner Publishers are subscribed to eagerly. Unlike flatMap, their emitted elements are merged respecting the order of the original sequence. The concurrency argument lets you control how many merged Publishers can run in parallel.

    R

    the merged output sequence type

    mapper

    the Function1 to transform input sequence into N sequences Publisher

    maxConcurrency

    the maximum in-flight elements from this Flux sequence

    returns

    a merged Flux

    Definition Classes
    Flux
  112. final def flatMapSequential[R](mapper: (OUT) ⇒ Publisher[_ <: R]): Flux[R]

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order.

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order. Unlike concatMap, transformed inner Publishers are subscribed to eagerly. Unlike flatMap, their emitted elements are merged respecting the order of the original sequence.

    R

    the merged output sequence type

    mapper

    the Function1 to transform each element of the input sequence into a Publisher

    returns

    a merged Flux

    Definition Classes
    Flux
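    The ordering guarantee above (inner Publishers subscribed to eagerly, but their elements emitted in source order) can be sketched with plain Scala collections; this is an illustrative analogy, not reactive code, since collections cannot show the eager subscription:

    ```scala
    // flatMapSequential merges inner sequences yet emits them in the order of
    // the originating source values, so the observable output is the same as a
    // sequential flatMap over the source.
    val source = List(1, 2, 3)
    val expanded = source.flatMap(n => List(n * 10, n * 10 + 1))
    // expanded == List(10, 11, 20, 21, 30, 31): each inner sequence appears
    // whole and in source order, with no interleaving visible downstream.
    ```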
  113. final def flatMapSequentialDelayError[R](mapper: (OUT) ⇒ Publisher[_ <: R], maxConcurrency: Int, prefetch: Int): Flux[R]

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order.

    Transform the items emitted by this Flux into Publishers, then flatten the emissions from those by merging them into a single Flux, in order. Unlike concatMap, transformed inner Publishers are subscribed to eagerly. Unlike flatMap, their emitted elements are merged respecting the order of the original sequence. The maxConcurrency argument controls how many inner Publishers may be merged in parallel. The prefetch argument gives an arbitrary prefetch size to each inner Publisher. This variant will delay any error until after the rest of the flatMap backlog has been processed.

    R

    the merged output sequence type

    mapper

    the Function1 to transform each element of the input sequence into a Publisher

    maxConcurrency

    the maximum in-flight elements from this Flux sequence

    prefetch

    the maximum in-flight elements from each inner Publisher sequence

    returns

    a merged Flux, subscribing early but keeping the original ordering

    Definition Classes
    Flux
  114. final def flatten[S](implicit ev: <:<[OUT, Flux[S]]): Flux[S]

    Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave).

    Bind dynamic sequences given this input sequence like Flux.flatMap, but preserve ordering and concatenate emissions instead of merging (no interleave). Errors will immediately short-circuit the current concat backlog.

    Alias for concatMap.

    returns

    a concatenated Flux

    Definition Classes
    FluxLike
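    The concatenation semantics (inner sequences emitted one after another, never interleaved) can be sketched with plain Scala collections; an illustrative analogy, not reactive code:

    ```scala
    // flatten/concatMap emits each inner sequence completely before starting
    // the next one, preserving the order of the outer sequence.
    val nested = List(List(1, 2), List(3), List(4, 5))
    val flat = nested.flatten
    // flat == List(1, 2, 3, 4, 5)
    ```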
  115. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  116. def getPrefetch: Long

    The prefetch configuration of the Flux

    The prefetch configuration of the Flux

    returns

    the prefetch configuration of the Flux, -1L if unspecified

    Definition Classes
    Flux
  117. final def groupBy[K, V](keyMapper: (OUT) ⇒ K, valueMapper: (OUT) ⇒ V, prefetch: Int): Flux[GroupedFlux[K, V]]

    Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper.

    Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper. It will use the given value mapper to extract the element to route.

    K

    the key type extracted from each value of this sequence

    V

    the value type extracted from each value of this sequence

    keyMapper

    the key mapping function that evaluates each incoming element and returns a key.

    valueMapper

    the value mapping function that evaluates which data to extract for re-routing.

    prefetch

    the number of values to prefetch from the source

    returns

    a Flux of GroupedFlux grouped sequences

    Definition Classes
    Flux
  118. final def groupBy[K, V](keyMapper: (OUT) ⇒ K, valueMapper: (OUT) ⇒ V): Flux[GroupedFlux[K, V]]

    Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper.

    Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper. It will use the given value mapper to extract the element to route.

    K

    the key type extracted from each value of this sequence

    V

    the value type extracted from each value of this sequence

    keyMapper

    the key mapping function that evaluates each incoming element and returns a key.

    valueMapper

    the value mapping function that evaluates which data to extract for re-routing.

    returns

    a Flux of GroupedFlux grouped sequences

    Definition Classes
    Flux
  119. final def groupBy[K](keyMapper: (OUT) ⇒ K, prefetch: Int): Flux[GroupedFlux[K, OUT]]

    Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper.

    Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper.

    K

    the key type extracted from each value of this sequence

    keyMapper

    the key mapping Function1 that evaluates each incoming element and returns a key.

    prefetch

    the number of values to prefetch from the source

    returns

    a Flux of GroupedFlux grouped sequences

    Definition Classes
    Flux
  120. final def groupBy[K](keyMapper: (OUT) ⇒ K): Flux[GroupedFlux[K, OUT]]

    Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper.

    Re-route this sequence into dynamically created Flux for each unique key evaluated by the given key mapper.

    K

    the key type extracted from each value of this sequence

    keyMapper

    the key mapping Function1 that evaluates each incoming element and returns a key.

    returns

    a Flux of GroupedFlux grouped sequences

    Definition Classes
    Flux
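    The key routing described above can be sketched with the stdlib groupBy on plain collections; an illustrative analogy only, since Flux.groupBy routes elements into live GroupedFlux sequences rather than building a Map:

    ```scala
    // Each element is routed to the bucket chosen by the key mapper,
    // mirroring how each value is routed to a per-key GroupedFlux.
    val words = List("apple", "avocado", "banana", "blueberry", "cherry")
    val byInitial = words.groupBy(_.head)
    // byInitial('a') == List("apple", "avocado")
    // byInitial('b') == List("banana", "blueberry")
    ```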
  121. final def groupJoin[TRight, TLeftEnd, TRightEnd, R](other: Publisher[_ <: TRight], leftEnd: (OUT) ⇒ Publisher[TLeftEnd], rightEnd: (TRight) ⇒ Publisher[TRightEnd], resultSelector: (OUT, Flux[TRight]) ⇒ R): Flux[R]

    Returns a Flux that correlates two Publishers when they overlap in time and groups the results.

    Returns a Flux that correlates two Publishers when they overlap in time and groups the results.

    There are no guarantees in what order the items get combined when multiple items from one or both source Publishers overlap.

    Unlike Flux.join, items from the right Publisher will be streamed into the right resultSelector argument Flux.

    TRight

    the type of the right Publisher

    TLeftEnd

    this Flux timeout type

    TRightEnd

    the right Publisher timeout type

    R

    the combined result type

    other

    the other Publisher to correlate items from the source Publisher with

    leftEnd

    a function that returns a Publisher whose emissions indicate the duration of the values of the source Publisher

    rightEnd

    a function that returns a Publisher whose emissions indicate the duration of the values of the right Publisher

    resultSelector

    a function that takes an item emitted by each Publisher and returns the value to be emitted by the resulting Publisher

    returns

    a joining Flux

    Definition Classes
    Flux
  122. final def handle[R](handler: (OUT, SynchronousSink[R]) ⇒ Unit): Flux[R]

    Handle the items emitted by this Flux by calling a biconsumer with the output sink for each onNext.

    Handle the items emitted by this Flux by calling a biconsumer with the output sink for each onNext. The handler may call SynchronousSink.next at most once per element, and may additionally call SynchronousSink.error or SynchronousSink.complete at most once.

    R

    the transformed type

    handler

    the handling BiConsumer

    returns

    a transformed Flux

    Definition Classes
    Flux
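    The "at most one next per element" contract makes handle behave like a combined map-and-filter. With plain collections (Scala 2.13+, not reactive code) the same shape is a function to Option followed by flattening:

    ```scala
    // A handler that calls next at most once per element: emit the parsed
    // value when it is a positive integer, otherwise emit nothing.
    def parsePositive(s: String): Option[Int] =
      s.toIntOption.filter(_ > 0) // "next" at most once, otherwise drop

    val out = List("1", "x", "-3", "42").flatMap(parsePositive)
    // out == List(1, 42)
    ```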
  123. def hasCompleted: Boolean

    Return true if terminated with onComplete

    Return true if terminated with onComplete

    returns

    true if terminated with onComplete

  124. def hasDownstreams: Boolean

    Return true if any Subscriber is actively subscribed

    Return true if any Subscriber is actively subscribed

    returns

    true if any Subscriber is actively subscribed

  125. final def hasElement(value: OUT): Mono[Boolean]

    Emit a single boolean true if any of the values of this Flux sequence match the constant.

    Emit a single boolean true if any of the values of this Flux sequence match the constant.

    The implementation uses short-circuit logic and completes with true if the constant matches a value.

    value

    constant compared to incoming signals

    returns

    a new Mono with true if any value matches the constant and false otherwise

    Definition Classes
    Flux
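    The short-circuit behavior can be sketched with an iterator and the stdlib exists; an illustrative analogy, not reactive code:

    ```scala
    // exists stops consuming the iterator at the first match, mirroring how
    // hasElement completes with true before the sequence ends.
    val found = Iterator.from(1).exists(_ == 3) // conceptually unbounded source
    // found == true, after consuming only three elements
    ```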
  126. final def hasElements(): Mono[Boolean]

    Emit a single boolean true if this Flux sequence has at least one element.

    Emit a single boolean true if this Flux sequence has at least one element.

    The implementation uses short-circuit logic and completes with true on onNext.

    returns

    a new Mono with true if any value is emitted and false otherwise

    Definition Classes
    Flux
  127. def hasError: Boolean

    Return true if terminated with onError

    Return true if terminated with onError

    returns

    true if terminated with onError

  128. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  129. final def hide(): Flux[OUT]

    Hides the identities of this Flux and its Subscription as well.

    Hides the identities of this Flux and its Subscription as well.

    returns

    a new Flux defeating any Publisher / Subscription feature-detection

    Definition Classes
    Flux
  130. final def ignoreElements(): Mono[OUT]

    Ignores onNext signals (dropping them) and only reacts on termination.

    Ignores onNext signals (dropping them) and only reacts on termination.

    returns

    a new completable Mono.

    Definition Classes
    Flux
  131. final def index[I](indexMapper: (Long, OUT) ⇒ I): Flux[I]

    Keep information about the order in which source values were received by indexing them internally with a 0-based incrementing long then combining this information with the source value into a I using the provided Function2, returning a Flux[I].

    Keep information about the order in which source values were received by indexing them internally with a 0-based incrementing long then combining this information with the source value into a I using the provided Function2, returning a Flux[I].

    Typical usage would be to produce a scala.Tuple2 similar to Flux.index(), but 1-based instead of 0-based:

    index((i, v) => (i+1, v))

    indexMapper

    the Function2 to use to combine elements and their index.

    returns

    an indexed Flux with each source value combined with its computed index.

    Definition Classes
    Flux
  132. final def index(): Flux[(Long, OUT)]

    Keep information about the order in which source values were received by indexing them with a 0-based incrementing long, returning a Flux of scala.Tuple2 of index and value.

    Keep information about the order in which source values were received by indexing them with a 0-based incrementing long, returning a Flux of scala.Tuple2 of index and value.

    returns

    an indexed Flux with each source value combined with its 0-based index.

    Definition Classes
    Flux
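    The indexing semantics can be sketched with zipWithIndex on plain collections; an illustrative analogy, not reactive code (note that index() puts the index first in the tuple):

    ```scala
    // Pair each element with its 0-based position, index first.
    val indexed = List("a", "b", "c").zipWithIndex.map { case (v, i) => (i.toLong, v) }
    // indexed == List((0L, "a"), (1L, "b"), (2L, "c"))

    // The 1-based variant from the indexMapper example: index((i, v) => (i+1, v))
    val oneBased = indexed.map { case (i, v) => (i + 1, v) }
    // oneBased == List((1L, "a"), (2L, "b"), (3L, "c"))
    ```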
  133. def inners(): Stream[_ <: Scannable]
    Definition Classes
    FluxProcessor → Scannable
  134. def isDisposed(): Boolean
    Definition Classes
    Disposable
  135. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  136. def isScanAvailable: Boolean
    Definition Classes
    Scannable
  137. def isSerialized: Boolean

    Return true if this FluxProcessor supports multithread producing

    Return true if this FluxProcessor supports multithread producing

    returns

    true if this FluxProcessor supports multithread producing

  138. def isTerminated: Boolean

    Has this upstream finished, i.e. "completed" or "failed"?

    Has this upstream finished, i.e. "completed" or "failed"?

    returns

    whether this upstream has finished, i.e. "completed" or "failed"

  139. def jScannable: core.Scannable
    Definition Classes
    Flux → Scannable
  140. final def join[TRight, TLeftEnd, TRightEnd, R](other: Publisher[_ <: TRight], leftEnd: (OUT) ⇒ Publisher[TLeftEnd], rightEnd: (TRight) ⇒ Publisher[TRightEnd], resultSelector: (OUT, TRight) ⇒ R): Flux[R]

    Returns a Flux that correlates two Publishers when they overlap in time and groups the results.

    Returns a Flux that correlates two Publishers when they overlap in time and groups the results.

    There are no guarantees in what order the items get combined when multiple items from one or both source Publishers overlap.

    TRight

    the type of the right Publisher

    TLeftEnd

    this Flux timeout type

    TRightEnd

    the right Publisher timeout type

    R

    the combined result type

    other

    the other Publisher to correlate items from the source Publisher with

    leftEnd

    a function that returns a Publisher whose emissions indicate the duration of the values of the source Publisher

    rightEnd

    a function that returns a Publisher whose emissions indicate the duration of the values of the right Publisher

    resultSelector

    a function that takes an item emitted by each Publisher and returns the value to be emitted by the resulting Publisher

    returns

    a joining Flux

    Definition Classes
    Flux
  141. final def last(defaultValue: OUT): Mono[OUT]

    Signal the last element observed before complete signal or emit the defaultValue if empty.

    Signal the last element observed before complete signal or emit the defaultValue if empty. For a passive version use Flux.takeLast

    defaultValue

    a single fallback item if this Flux is empty

    returns

    a Mono with the last value in this Flux, or the defaultValue if empty

    Definition Classes
    Flux
  142. final def last(): Mono[OUT]

    Signal the last element observed before complete signal or emit NoSuchElementException error if the source was empty.

    Signal the last element observed before complete signal or emit NoSuchElementException error if the source was empty. For a passive version use Flux.takeLast

    returns

    a Mono with the last value in this Flux

    Definition Classes
    Flux
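    The last(defaultValue) semantics can be sketched on plain collections; an illustrative analogy, not reactive code (lastOr is a hypothetical helper, not part of the API):

    ```scala
    // The final element of the sequence, or the fallback when it is empty.
    def lastOr[A](xs: Seq[A], default: A): A = xs.lastOption.getOrElse(default)

    val a = lastOr(Seq(1, 2, 3), 0) // 3
    val b = lastOr(Seq.empty[Int], 0) // 0
    ```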
  143. final def limitRate(prefetchRate: Int): Flux[OUT]

    Ensure that backpressure signals from downstream subscribers are capped at the provided prefetchRate when propagated upstream, effectively rate limiting the upstream Publisher.

    Ensure that backpressure signals from downstream subscribers are capped at the provided prefetchRate when propagated upstream, effectively rate limiting the upstream Publisher.

    Typically used for scenarios where consumer(s) request a large amount of data (e.g. Long.MaxValue) but the data source behaves better or can be optimized with smaller requests (e.g. database paging). All data is still processed.

    Equivalent to flux.publishOn(Schedulers.immediate(), prefetchRate).subscribe()

    prefetchRate

    the limit to apply to downstream's backpressure

    returns

    a Flux limiting downstream's backpressure

    Definition Classes
    Flux
  144. final def log(category: String, level: Level, showOperatorLine: Boolean, options: SignalType*): Flux[OUT]

    Observe Reactive Streams signals matching the passed filter options and use Logger support to handle trace implementation.

    Observe Reactive Streams signals matching the passed filter options and use Logger support to handle trace implementation. Default will use the passed Level and java.util.logging. If SLF4J is available, it will be used instead.

    Options allow fine grained filtering of the traced signal, for instance to only capture onNext and onError:

        flux.log("category", Level.INFO, SignalType.ON_NEXT, SignalType.ON_ERROR)
    
    
    
    category

    to be mapped into logger configuration (e.g. org.springframework.reactor). If category ends with "." like "reactor.", a generated operator suffix will complete, e.g. "reactor.Flux.Map".

    level

    the Level to enforce for this tracing Flux (only FINEST, FINE, INFO, WARNING and SEVERE are taken into account)

    showOperatorLine

    capture the current stack to display operator class/line number

    options

    a vararg SignalType option to filter log messages

    returns

    a new unaltered Flux

    Definition Classes
    Flux
  145. final def log(category: String, level: Level, options: SignalType*): Flux[OUT]

    Observe Reactive Streams signals matching the passed filter options and use Logger support to handle trace implementation.

    Observe Reactive Streams signals matching the passed filter options and use Logger support to handle trace implementation. Default will use the passed Level and java.util.logging. If SLF4J is available, it will be used instead.

    Options allow fine grained filtering of the traced signal, for instance to only capture onNext and onError:

        flux.log("category", Level.INFO, SignalType.ON_NEXT, SignalType.ON_ERROR)
    
    
    
    
    
    
    
    
    category

    to be mapped into logger configuration (e.g. org.springframework.reactor). If category ends with "." like "reactor.", a generated operator suffix will complete, e.g. "reactor.Flux.Map".

    level

    the Level to enforce for this tracing Flux (only FINEST, FINE, INFO, WARNING and SEVERE are taken into account)

    options

    a vararg SignalType option to filter log messages

    returns

    a new unaltered Flux

    Definition Classes
    Flux
  146. final def log(category: String): Flux[OUT]

    Observe all Reactive Streams signals and use Logger support to handle trace implementation.

    Observe all Reactive Streams signals and use Logger support to handle trace implementation. Default will use Level.INFO and java.util.logging. If SLF4J is available, it will be used instead.

    category

    to be mapped into logger configuration (e.g. org.springframework.reactor). If category ends with "." like "reactor.", a generated operator suffix will complete, e.g. "reactor.Flux.Map".

    returns

    a new unaltered Flux

    Definition Classes
    Flux
  147. final def log(): Flux[OUT]

    Observe all Reactive Streams signals and use Logger support to handle trace implementation.

    Observe all Reactive Streams signals and use Logger support to handle trace implementation. Default will use Level.INFO and java.util.logging. If SLF4J is available, it will be used instead.

    The default log category will be "reactor.*", a generated operator suffix will complete, e.g. "reactor.Flux.Map".

    returns

    a new unaltered Flux

    Definition Classes
    Flux
  148. final def map[V](mapper: (OUT) ⇒ V): Flux[V]

    Transform the items emitted by this Flux by applying a function to each item.

    Transform the items emitted by this Flux by applying a function to each item.

    V

    the transformed type

    mapper

    the transforming Function1

    returns

    a transformed Flux

    Definition Classes
    Flux → MapablePublisher
  149. final def materialize(): Flux[Signal[OUT]]

    Transform the incoming onNext, onError and onComplete signals into Signal.

    Transform the incoming onNext, onError and onComplete signals into Signal. Since the error is materialized as a Signal, the propagation will be stopped and onComplete will be emitted. Complete signal will first emit a Signal.complete and then effectively complete the flux.

    returns

    a Flux of materialized Signal

    Definition Classes
    Flux
  150. final def mergeWith(other: Publisher[_ <: OUT]): Flux[OUT]

    Merge data from this Flux and a Publisher into an interleaved merged sequence.

    Merge data from this Flux and a Publisher into an interleaved merged sequence. Unlike concat, inner sources are subscribed to eagerly.

    Note that merge is tailored to work with asynchronous sources or finite sources. When dealing with an infinite source that doesn't already publish on a dedicated Scheduler, you must isolate that source in its own Scheduler, as merge would otherwise attempt to drain it before subscribing to another source.

    other

    the Publisher to merge with

    returns

    a new Flux

    Definition Classes
    Flux
  151. final def name(name: String): Flux[OUT]

    Give a name to this sequence, which can be retrieved using reactor.core.scala.Scannable.name() as long as this is the first reachable reactor.core.scala.Scannable.parents().

    Give a name to this sequence, which can be retrieved using reactor.core.scala.Scannable.name() as long as this is the first reachable reactor.core.scala.Scannable.parents().

    name

    a name for the sequence

    returns

    the same sequence, but bearing a name

    Definition Classes
    Flux
  152. def name: String

    Check this Scannable and its Scannable.parents() for a name and return the first one that is reachable.

    Check this Scannable and its Scannable.parents() for a name and return the first one that is reachable.

    returns

    the name of the first parent that has one defined (including this scannable)

    Definition Classes
    Scannable
  153. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  154. final def next(): Mono[OUT]

    Emit only the first item emitted by this Flux.

    Emit only the first item emitted by this Flux.

    returns

    a new Mono

    Definition Classes
    Flux
  155. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  156. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  157. final def ofType[U](clazz: Class[U]): Flux[U]

    Evaluate each accepted value against the given Class type.

    Evaluate each accepted value against the given Class type. If the predicate test succeeds, the value is passed into the new Flux. If the predicate test fails, the value is ignored and a request of 1 is emitted.

    clazz

    the Class type to test values against

    returns

    a new Flux reduced to items converted to the matched type

    Definition Classes
    Flux
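    The ofType semantics (keep only values of the requested type, silently drop the rest) can be sketched with collect and a type pattern on plain collections; an illustrative analogy, not reactive code:

    ```scala
    // Only values that are instances of the requested class pass through.
    val mixed: List[Any] = List(1, "two", 3, "four")
    val strings = mixed.collect { case s: String => s }
    // strings == List("two", "four")
    ```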
  158. final def onBackpressureBuffer(maxSize: Int, onBufferOverflow: (OUT) ⇒ Unit, bufferOverflowStrategy: BufferOverflowStrategy): Flux[OUT]

    Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream, within a maxSize limit.

    Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream, within a maxSize limit. Over that limit, the overflow strategy is applied (see BufferOverflowStrategy).

    A Consumer is immediately invoked when there is an overflow, receiving the value that was discarded because of the overflow (which can be different from the latest element emitted by the source in case of a DROP_LATEST strategy).

    Note that for the ERROR strategy, the overflow error will be delayed until after the current backlog is consumed. The consumer is still invoked immediately.

    maxSize

    maximum buffer backlog size before overflow callback is called

    onBufferOverflow

    callback to invoke on overflow

    bufferOverflowStrategy

    strategy to apply to overflowing elements

    returns

    a buffering Flux

    Definition Classes
    Flux
  159. final def onBackpressureBuffer(maxSize: Int, bufferOverflowStrategy: BufferOverflowStrategy): Flux[OUT]

    Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream, within a maxSize limit.

    Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream, within a maxSize limit. Over that limit, the overflow strategy is applied (see BufferOverflowStrategy).

    Note that for the ERROR strategy, the overflow error will be delayed until after the current backlog is consumed.

    maxSize

    maximum buffer backlog size before overflow strategy is applied

    bufferOverflowStrategy

    strategy to apply to overflowing elements

    returns

    a buffering Flux

    Definition Classes
    Flux
  160. final def onBackpressureBuffer(maxSize: Int, onOverflow: (OUT) ⇒ Unit): Flux[OUT]

    Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream.

    Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream. The overflow error will be delayed until the current backlog is consumed. However, the onOverflow callback is invoked immediately.

    maxSize

    maximum buffer backlog size before overflow callback is called

    onOverflow

    callback to invoke on overflow

    returns

    a buffering Flux

    Definition Classes
    Flux
  161. final def onBackpressureBuffer(maxSize: Int): Flux[OUT]

    Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream.

    Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream. Errors will be immediately emitted on overflow regardless of the pending buffer.

    maxSize

    maximum buffer backlog size before immediate error

    returns

    a buffering Flux

    Definition Classes
    Flux
  162. final def onBackpressureBuffer(): Flux[OUT]

    Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream.

    Request an unbounded demand and push to the returned Flux, or park the observed elements if not enough demand is requested downstream. Errors will be delayed until the buffer gets consumed.

    returns

    a buffering Flux

    Definition Classes
    Flux
  163. final def onBackpressureDrop(onDropped: (OUT) ⇒ Unit): Flux[OUT]

    Request an unbounded demand and push to the returned Flux, or drop and notify the dropping consumer with the observed elements if not enough demand is requested downstream.

    Request an unbounded demand and push to the returned Flux, or drop and notify the dropping consumer with the observed elements if not enough demand is requested downstream.

    onDropped

    the Consumer called when a value gets dropped due to lack of downstream requests

    returns

    a dropping Flux

    Definition Classes
    Flux
  164. final def onBackpressureDrop(): Flux[OUT]

    Request an unbounded demand and push to the returned Flux, or drop the observed elements if not enough demand is requested downstream.

    Request an unbounded demand and push to the returned Flux, or drop the observed elements if not enough demand is requested downstream.

    returns

    a dropping Flux

    Definition Classes
    Flux
  165. final def onBackpressureError(): Flux[OUT]

    Request an unbounded demand and push to the returned Flux, or emit onError from reactor.core.Exceptions.failWithOverflow if not enough demand is requested downstream.

    Request an unbounded demand and push to the returned Flux, or emit onError from reactor.core.Exceptions.failWithOverflow if not enough demand is requested downstream.

    returns

    an erroring Flux on backpressure

    Definition Classes
    Flux
  166. final def onBackpressureLatest(): Flux[OUT]

    Request an unbounded demand and push to the returned Flux, or only keep the most recent observed item if not enough demand is requested downstream.

    Request an unbounded demand and push to the returned Flux, or only keep the most recent observed item if not enough demand is requested downstream.

    returns

    a dropping Flux that will only keep a reference to the last observed item

    Definition Classes
    Flux
  167. def onComplete(): Unit
    Definition Classes
    FluxProcessor → Subscriber
  168. def onError(t: Throwable): Unit
    Definition Classes
    FluxProcessor → Subscriber
  169. final def onErrorMap(predicate: (Throwable) ⇒ Boolean, mapper: Function1[Throwable, _ <: Throwable]): Flux[OUT]

    Transform the error emitted by this Flux by applying a function if the error matches the given predicate, otherwise let the error flow.

    Transform the error emitted by this Flux by applying a function if the error matches the given predicate, otherwise let the error flow.

    predicate

    the error predicate

    mapper

    the error transforming Function1

    returns

    a transformed Flux

    Definition Classes
    Flux
  170. final def onErrorMap[E <: Throwable](type: Class[E], mapper: Function1[E, _ <: Throwable]): Flux[OUT]

    Transform the error emitted by this Flux by applying a function if the error matches the given type, otherwise let the error flow.

    Transform the error emitted by this Flux by applying a function if the error matches the given type, otherwise let the error flow.

    E

    the error type

    type

    the class of the exception type to react to

    mapper

    the error transforming Function1

    returns

    a transformed Flux

    Definition Classes
    Flux
  171. final def onErrorMap(mapper: Function1[Throwable, _ <: Throwable]): Flux[OUT]

    Transform the error emitted by this Flux by applying a function.

    Transform the error emitted by this Flux by applying a function.

    mapper

    the error transforming Function1

    returns

    a transformed Flux

    Definition Classes
    Flux
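    The error-transformation shape can be sketched on a single computation with Try; an illustrative analogy, not reactive code (mapError is a hypothetical helper standing in for the operator):

    ```scala
    import scala.util.{Failure, Try}

    // Transform the failure, pass successes through untouched, as onErrorMap
    // rewrites the Throwable carried by the onError signal.
    def mapError[A](t: Try[A])(f: Throwable => Throwable): Try[A] = t match {
      case Failure(e) => Failure(f(e))
      case ok         => ok
    }

    val wrapped = mapError(Try(1 / 0))(e => new IllegalStateException("wrapped", e))
    // wrapped is a Failure holding an IllegalStateException whose cause is
    // the original ArithmeticException
    ```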
  172. final def onErrorRecover[U <: OUT](pf: PartialFunction[Throwable, U]): Flux[OUT]

    Returns a Flux that mirrors the behavior of the source, unless the source is terminated with an onError, in which case the streaming of events falls back to a Flux emitting a single element generated by the backup function.

    Returns a Flux that mirrors the behavior of the source, unless the source is terminated with an onError, in which case the streaming of events falls back to a Flux emitting a single element generated by the backup function.

    The created Flux mirrors the behavior of the source in case the source does not end with an error or if the thrown Throwable is not matched.

    See onErrorResume for the version that takes a total function as a parameter.

    pf

    a function that matches errors with a backup element that is emitted when the source throws an error.

    Definition Classes
    FluxLike
  173. final def onErrorRecoverWith[U <: OUT](pf: PartialFunction[Throwable, Flux[U]]): Flux[OUT]

    Returns a Flux that mirrors the behavior of the source, unless the source is terminated with an onError, in which case the streaming of events continues with the specified backup sequence generated by the given function.

    Returns a Flux that mirrors the behavior of the source, unless the source is terminated with an onError, in which case the streaming of events continues with the specified backup sequence generated by the given function.

    The created Flux mirrors the behavior of the source in case the source does not end with an error or if the thrown Throwable is not matched.

    See onErrorResume for the version that takes a total function as a parameter.

    pf

    a function that matches errors with a backup Flux that is subscribed to when the source throws an error.

    Definition Classes
    FluxLike
  174. final def onErrorResume(predicate: (Throwable) ⇒ Boolean, fallback: Function1[Throwable, _ <: Publisher[_ <: OUT]]): Flux[OUT]

    Subscribe to a returned fallback publisher when an error matching the given type occurs.

    Subscribe to a returned fallback publisher when an error matching the given type occurs.

    predicate

    the error predicate to match

    fallback

    the Function1 mapping the error to a new Publisher sequence

    returns

    a new Flux

    Definition Classes
    Flux
  175. final def onErrorResume[E <: Throwable](type: Class[E], fallback: Function1[E, _ <: Publisher[_ <: OUT]]): Flux[OUT]

    Subscribe to a returned fallback publisher when an error matching the given type occurs.

    Subscribe to a returned fallback publisher when an error matching the given type occurs.


    E

    the error type

    type

    the error type to match

    fallback

    the Function1 mapping the error to a new Publisher sequence

    returns

    a new Flux

    Definition Classes
    Flux
  176. final def onErrorResume[U <: OUT](fallback: Function1[Throwable, _ <: Publisher[_ <: U]]): Flux[U]

    Subscribe to a returned fallback publisher when any error occurs.

    Subscribe to a returned fallback publisher when any error occurs.

    fallback

    the Function1 mapping the error to a new Publisher sequence

    returns

    a new Flux

    Definition Classes
    Flux
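
    The fallback semantics can be illustrated outside Reactor with Scala's Try.recoverWith, which likewise swaps in an alternative computation only when the original fails (a plain-Scala analogy; the operator itself works on Publisher sequences):

    ```scala
    import scala.util.{Try, Success}

    // Analogy for onErrorResume: when the primary computation fails, continue
    // with a fallback computation instead of propagating the error.
    def primary(): Int = throw new IllegalStateException("boom")
    def fallback(): Int = 42

    val result: Try[Int] = Try(primary()).recoverWith {
      case _: IllegalStateException => Try(fallback())
    }
    ```

    As with the operator, the fallback is only consulted when the matched error actually occurs; a successful primary value passes through untouched.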
  177. final def onErrorReturn(predicate: (Throwable) ⇒ Boolean, fallbackValue: OUT): Flux[OUT]

    Fallback to the given value if an error matching the given predicate is observed on this Flux

    Fallback to the given value if an error matching the given predicate is observed on this Flux

    predicate

    the error predicate to match

    fallbackValue

    alternate value on fallback

    returns

    a new Flux

    Definition Classes
    FluxOnErrorReturn
  178. final def onErrorReturn[E <: Throwable](type: Class[E], fallbackValue: OUT): Flux[OUT]

    Fallback to the given value if an error of a given type is observed on this Flux

    Fallback to the given value if an error of a given type is observed on this Flux

    E

    the error type

    type

    the error type to match

    fallbackValue

    alternate value on fallback

    returns

    a new Flux

    Definition Classes
    FluxOnErrorReturn
  179. final def onErrorReturn(fallbackValue: OUT): Flux[OUT]

    Fallback to the given value if an error is observed on this Flux

    Fallback to the given value if an error is observed on this Flux

    fallbackValue

    alternate value on fallback

    returns

    a new Flux

    Definition Classes
    FluxOnErrorReturn
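
    The value-fallback semantics can be sketched with Try.getOrElse (a stdlib analogy, not the operator itself):

    ```scala
    import scala.util.Try

    // Analogy for onErrorReturn: replace any error with a fixed fallback value.
    def risky(n: Int): Int =
      if (n < 0) throw new IllegalArgumentException("negative input") else n * 2

    val ok  = Try(risky(3)).getOrElse(-1)   // no error: keeps the computed value
    val bad = Try(risky(-3)).getOrElse(-1)  // error: falls back to -1
    ```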
  180. def onNext(t: IN): Unit
    Definition Classes
    FluxProcessor → Subscriber
  181. def onSubscribe(s: Subscription): Unit
    Definition Classes
    FluxProcessor → Subscriber
  182. final def onTerminateDetach(): Flux[OUT]

    Detaches both the child Subscriber and the Subscription on termination or cancellation.

    Detaches both the child Subscriber and the Subscription on termination or cancellation.

    This should help with odd retention scenarios when running with non-reactor Subscribers.

    returns

    a detachable Flux

    Definition Classes
    Flux
  183. def operatorName: String

    Check this Scannable and its Scannable.parents() for a name and return the first one that is reachable.

    Check this Scannable and its Scannable.parents() for a name and return the first one that is reachable.

    returns

    the name of the first parent that has one defined (including this scannable)

    Definition Classes
    Scannable
  184. final def or(other: Publisher[_ <: OUT]): Flux[OUT]

    Pick the first Publisher between this Flux and another publisher to emit any signal (onNext/onError/onComplete) and replay all signals from that Publisher, effectively behaving like the fastest of these competing sources.

    Pick the first Publisher between this Flux and another publisher to emit any signal (onNext/onError/onComplete) and replay all signals from that Publisher, effectively behaving like the fastest of these competing sources.

    other

    the Publisher to race with

    returns

    the fastest sequence

    Definition Classes
    Flux
    See also

    Flux.first

  185. final def parallel(parallelism: Int, prefetch: Int): SParallelFlux[OUT]

    Prepare to consume this Flux on parallelism number of 'rails' in round-robin fashion and use custom prefetch amount and queue for dealing with the source Flux's values.

    Prepare to consume this Flux on parallelism number of 'rails' in round-robin fashion and use custom prefetch amount and queue for dealing with the source Flux's values.

    parallelism

    the number of parallel rails

    prefetch

    the number of values to prefetch from the source

    returns

    a new SParallelFlux instance

    Definition Classes
    Flux
  186. final def parallel(parallelism: Int): SParallelFlux[OUT]

    Prepare to consume this Flux on parallelism number of 'rails' in round-robin fashion.

    Prepare to consume this Flux on parallelism number of 'rails' in round-robin fashion.

    parallelism

    the number of parallel rails

    returns

    a new SParallelFlux instance

    Definition Classes
    Flux
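
    The round-robin distribution across rails can be sketched with plain collections: element i is dispatched to rail i % parallelism (an illustration of the assignment only, not of the operator's threading):

    ```scala
    // Sketch of the round-robin distribution parallel(n) performs:
    // element i goes to rail i % n.
    val parallelism = 3
    val source = (1 to 10).toList

    val rails: Map[Int, List[Int]] =
      source.zipWithIndex.groupMap { case (_, i) => i % parallelism } { case (v, _) => v }
    ```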
  187. final def parallel(): SParallelFlux[OUT]

    Prepare to consume this Flux on a number of 'rails' matching the number of CPU cores, in round-robin fashion.

    Prepare to consume this Flux on a number of 'rails' matching the number of CPU cores, in round-robin fashion.

    returns

    a new SParallelFlux instance

    Definition Classes
    Flux
  188. def parents: Stream[_ <: Scannable]

    Return a Stream navigating the org.reactivestreams.Subscription chain (upward).

    Return a Stream navigating the org.reactivestreams.Subscription chain (upward).

    returns

    a Stream navigating the org.reactivestreams.Subscription chain (upward)

    Definition Classes
    Scannable
  189. final def publish[R](transform: Function1[Flux[OUT], _ <: Publisher[_ <: R]], prefetch: Int): Flux[R]

    Shares a sequence for the duration of a function that may transform it and consume it as many times as necessary without causing multiple subscriptions to the upstream.

    Shares a sequence for the duration of a function that may transform it and consume it as many times as necessary without causing multiple subscriptions to the upstream.

    R

    the output value type

    transform

    the transformation function

    prefetch

    the request size

    returns

    a new Flux

    Definition Classes
    Flux
  190. final def publish[R](transform: Function1[Flux[OUT], _ <: Publisher[_ <: R]]): Flux[R]

    Shares a sequence for the duration of a function that may transform it and consume it as many times as necessary without causing multiple subscriptions to the upstream.

    Shares a sequence for the duration of a function that may transform it and consume it as many times as necessary without causing multiple subscriptions to the upstream.

    R

    the output value type

    transform

    the transformation function

    returns

    a new Flux

    Definition Classes
    Flux
  191. final def publish(prefetch: Int): ConnectableFlux[OUT]

    Prepare a ConnectableFlux which shares this Flux sequence and dispatches values to subscribers in a backpressure-aware manner.

    Prepare a ConnectableFlux which shares this Flux sequence and dispatches values to subscribers in a backpressure-aware manner. This will effectively turn any type of sequence into a hot sequence.

    Backpressure will be coordinated on Subscription.request and if any Subscriber is missing demand (requested = 0), multicast will pause pushing/pulling.

    prefetch

    bounded requested demand

    returns

    a new ConnectableFlux

    Definition Classes
    Flux
  192. final def publish(): ConnectableFlux[OUT]

    Prepare a ConnectableFlux which shares this Flux sequence and dispatches values to subscribers in a backpressure-aware manner.

    Prepare a ConnectableFlux which shares this Flux sequence and dispatches values to subscribers in a backpressure-aware manner. Prefetch will default to reactor.util.concurrent.Queues.SMALL_BUFFER_SIZE. This will effectively turn any type of sequence into a hot sequence.

    Backpressure will be coordinated on Subscription.request and if any Subscriber is missing demand (requested = 0), multicast will pause pushing/pulling.

    returns

    a new ConnectableFlux

    Definition Classes
    Flux
  193. final def publishNext(): Mono[OUT]

    Prepare a Mono which shares this Flux sequence and dispatches the first observed item to subscribers in a backpressure-aware manner.

    Prepare a Mono which shares this Flux sequence and dispatches the first observed item to subscribers in a backpressure-aware manner. This will effectively turn any type of sequence into a hot sequence when the first Subscriber subscribes.

    returns

    a new Mono

    Definition Classes
    Flux
  194. final def publishOn(scheduler: Scheduler, delayError: Boolean, prefetch: Int): Flux[OUT]

    Run onNext, onComplete and onError on a supplied Scheduler reactor.core.scheduler.Scheduler.Worker.

    Run onNext, onComplete and onError on a supplied Scheduler reactor.core.scheduler.Scheduler.Worker.

    Typically used for fast publisher, slow consumer(s) scenarios.

    flux.publishOn(Schedulers.single()).subscribe()

    scheduler

    a checked reactor.core.scheduler.Scheduler.Worker factory

    delayError

    should the buffer be consumed before forwarding any error

    prefetch

    the asynchronous boundary capacity

    returns

    a Flux producing asynchronously

    Definition Classes
    Flux
  195. final def publishOn(scheduler: Scheduler, prefetch: Int): Flux[OUT]

    Run onNext, onComplete and onError on a supplied Scheduler reactor.core.scheduler.Scheduler.Worker.

    Run onNext, onComplete and onError on a supplied Scheduler reactor.core.scheduler.Scheduler.Worker.

    Typically used for fast publisher, slow consumer(s) scenarios.

    flux.publishOn(Schedulers.single()).subscribe()

    scheduler

    a checked reactor.core.scheduler.Scheduler.Worker factory

    prefetch

    the asynchronous boundary capacity

    returns

    a Flux producing asynchronously

    Definition Classes
    Flux
  196. final def publishOn(scheduler: Scheduler): Flux[OUT]

    Run onNext, onComplete and onError on a supplied Scheduler reactor.core.scheduler.Scheduler.Worker.

    Run onNext, onComplete and onError on a supplied Scheduler reactor.core.scheduler.Scheduler.Worker.

    Typically used for fast publisher, slow consumer(s) scenarios.

    flux.publishOn(Schedulers.single()).subscribe()

    scheduler

    a checked reactor.core.scheduler.Scheduler.Worker factory

    returns

    a Flux producing asynchronously

    Definition Classes
    Flux
  197. final def reduce[A](initial: A, accumulator: (A, OUT) ⇒ A): Mono[A]

    Accumulate the values from this Flux sequence into an object matching an initial value type.

    Accumulate the values from this Flux sequence into an object matching an initial value type. The arguments are the N-1 (or initial) value and the Nth current item.

    A

    the type of the initial and reduced object

    initial

    the initial left argument to pass to the reducing BiFunction

    accumulator

    the reducing BiFunction

    returns

    a reduced Mono

    Definition Classes
    Flux
  198. final def reduce(aggregator: (OUT, OUT) ⇒ OUT): Mono[OUT]

    Aggregate the values from this Flux sequence into an object of the same type as the emitted items.

    Aggregate the values from this Flux sequence into an object of the same type as the emitted items. The left/right BiFunction arguments are the N-1 and N item; the aggregator is not invoked for sequences of 0 or 1 element.

    aggregator

    the aggregating BiFunction

    returns

    a reduced Mono

    Definition Classes
    Flux
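
    The pairwise reduction described above matches reduceLeft on Scala collections, and the seeded reduce(initial, accumulator) variant matches foldLeft (a collections analogy of the accumulation order, not the operator itself):

    ```scala
    // reduce(aggregator) pairs the previous result (N-1) with the current item (N):
    val nums = List(1, 2, 3, 4)

    val reduced = nums.reduceLeft(_ + _)     // ((1 + 2) + 3) + 4
    val seeded  = nums.foldLeft(100)(_ + _)  // (((100 + 1) + 2) + 3) + 4
    ```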
  199. final def reduceWith[A](initial: () ⇒ A, accumulator: (A, OUT) ⇒ A): Mono[A]

    Accumulate the values from this Flux sequence into an object matching an initial value type.

    Accumulate the values from this Flux sequence into an object matching an initial value type. The arguments are the N-1 (or initial) value and the Nth current item.

    A

    the type of the initial and reduced object

    initial

    the initial left argument supplied on subscription to the reducing BiFunction

    accumulator

    the reducing BiFunction

    returns

    a reduced Mono

    Definition Classes
    Flux
  200. final def repeat(numRepeat: Long, predicate: () ⇒ Boolean): Flux[OUT]

    Repeatedly subscribe to the source if the predicate returns true after completion of the previous subscription.

    Repeatedly subscribe to the source if the predicate returns true after completion of the previous subscription. The specified maximum number of repeats limits the number of re-subscriptions.

    numRepeat

    the number of times to re-subscribe on complete

    predicate

    the boolean to evaluate on onComplete

    returns

    an eventually repeated Flux on onComplete up to number of repeat specified OR matching predicate

    Definition Classes
    Flux
  201. final def repeat(numRepeat: Long): Flux[OUT]

    Repeatedly subscribe to the source numRepeat times after completion of the previous subscription.

    Repeatedly subscribe to the source numRepeat times after completion of the previous subscription.

    numRepeat

    the number of times to re-subscribe on onComplete

    returns

    an eventually repeated Flux on onComplete up to number of repeat specified

    Definition Classes
    Flux
  202. final def repeat(predicate: () ⇒ Boolean): Flux[OUT]

    Repeatedly subscribe to the source if the predicate returns true after completion of the previous subscription.

    Repeatedly subscribe to the source if the predicate returns true after completion of the previous subscription.

    predicate

    the boolean to evaluate on onComplete.

    returns

    an eventually repeated Flux on onComplete

    Definition Classes
    Flux
  203. final def repeat(): Flux[OUT]

    Repeatedly subscribe to the source after completion of the previous subscription.

    Repeatedly subscribe to the source after completion of the previous subscription.

    returns

    an indefinitely repeated Flux on onComplete

    Definition Classes
    Flux
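
    The re-subscription semantics can be sketched with plain collections: the original subscription plus numRepeat re-subscriptions replay the whole sequence each time (an analogy over a materialized list, not the operator itself):

    ```scala
    // Analogy for repeat(numRepeat): one original subscription followed by
    // numRepeat re-subscriptions, each replaying the source sequence.
    val source = List(1, 2, 3)
    val numRepeat = 2

    val repeated: List[Int] = List.fill(numRepeat + 1)(source).flatten
    ```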
  204. final def repeatWhen(whenFactory: Function1[Flux[Long], _ <: Publisher[_]]): Flux[OUT]

    Repeatedly subscribe to this Flux when a companion sequence signals a number of emitted elements in response to the flux completion signal.

    Repeatedly subscribe to this Flux when a companion sequence signals a number of emitted elements in response to the flux completion signal.

    If the companion sequence signals when this Flux is active, the repeat attempt is suppressed and any terminal signal will terminate this Flux with the same signal immediately.

    whenFactory

    the Function1 providing a Flux signalling an exclusive number of emitted elements on onComplete and returning a Publisher companion.

    returns

    an eventually repeated Flux on onComplete when the companion Publisher produces an onNext signal

    Definition Classes
    Flux
  205. final def replay(history: Int, ttl: Duration): ConnectableFlux[OUT]

    Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber.

    Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber. Will retain up to the given history size onNext signals and given a per-item ttl. Completion and Error will also be replayed.

    history

    number of events retained in history excluding complete and error

    ttl

    Per-item timeout duration

    returns

    a replaying ConnectableFlux

    Definition Classes
    Flux
  206. final def replay(ttl: Duration): ConnectableFlux[OUT]

    Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber.

    Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber. Will retain each onNext up to the given per-item expiry timeout. Completion and Error will also be replayed.

    ttl

    Per-item timeout duration

    returns

    a replaying ConnectableFlux

    Definition Classes
    Flux
  207. final def replay(history: Int): ConnectableFlux[OUT]

    Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber.

    Turn this Flux into a connectable hot source and cache last emitted signals for further Subscriber. Will retain up to the given history size onNext signals. Completion and Error will also be replayed.

    history

    number of events retained in history excluding complete and error

    returns

    a replaying ConnectableFlux

    Definition Classes
    Flux
  208. final def replay(): ConnectableFlux[OUT]

    Turn this Flux into a hot source and cache last emitted signals for further Subscriber.

    Turn this Flux into a hot source and cache last emitted signals for further Subscriber. Will retain an unbounded amount of onNext signals. Completion and Error will also be replayed.

    returns

    a replaying ConnectableFlux

    Definition Classes
    Flux
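
    The bounded history kept by replay(history) can be sketched as a buffer retaining only the most recent onNext signals (a sketch of the retention policy only; the operator also replays terminal signals and handles concurrency):

    ```scala
    // Sketch of the bounded history replay(history) keeps: only the last
    // `history` elements are retained for late subscribers.
    def lastN[A](history: Int)(signals: List[A]): List[A] =
      signals.takeRight(history)

    val cached = lastN(2)(List("a", "b", "c", "d"))
    ```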
  209. final def retry(numRetries: Long, retryMatcher: (Throwable) ⇒ Boolean): Flux[OUT]

    Re-subscribes to this Flux sequence up to the specified number of retries if it signals any error and the given Predicate matches; otherwise the error is pushed downstream.

    Re-subscribes to this Flux sequence up to the specified number of retries if it signals any error and the given Predicate matches; otherwise the error is pushed downstream.

    numRetries

    the number of times to tolerate an error

    retryMatcher

    the predicate to evaluate if retry should occur based on a given error signal

    returns

    a re-subscribing Flux on onError up to the specified number of retries and if the predicate matches.

    Definition Classes
    Flux
  210. final def retry(retryMatcher: (Throwable) ⇒ Boolean): Flux[OUT]

    Re-subscribes to this Flux sequence if it signals any error and the given Predicate matches; otherwise the error is pushed downstream.

    Re-subscribes to this Flux sequence if it signals any error and the given Predicate matches; otherwise the error is pushed downstream.

    retryMatcher

    the predicate to evaluate if retry should occur based on a given error signal

    returns

    a re-subscribing Flux on onError if the predicates matches.

    Definition Classes
    Flux
  211. final def retry(numRetries: Long): Flux[OUT]

    Re-subscribes to this Flux sequence if it signals any error, either indefinitely or a fixed number of times.

    Re-subscribes to this Flux sequence if it signals any error, either indefinitely or a fixed number of times.

    A numRetries of Long.MAX_VALUE is treated as infinite retry.

    numRetries

    the number of times to tolerate an error

    returns

    a re-subscribing Flux on onError up to the specified number of retries.

    Definition Classes
    Flux
  212. final def retry(): Flux[OUT]

    Re-subscribes to this Flux sequence indefinitely if it signals any error.

    Re-subscribes to this Flux sequence indefinitely if it signals any error.

    returns

    a re-subscribing Flux on onError

    Definition Classes
    Flux
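
    The retry contract can be sketched as a loop that re-runs the source on error up to a limit, then propagates the last error (a synchronous sketch; Flux.retry re-subscribes to an asynchronous sequence):

    ```scala
    import scala.util.{Try, Success, Failure}

    // Sketch of retry(numRetries): re-run the source on error, up to the
    // given limit, then propagate the last error if every attempt failed.
    def retry[A](numRetries: Long)(source: () => A): Try[A] = {
      def attempt(remaining: Long): Try[A] = Try(source()) match {
        case Failure(_) if remaining > 0 => attempt(remaining - 1)
        case other                       => other
      }
      attempt(numRetries)
    }

    // A hypothetical source that fails twice, then succeeds on the third attempt.
    var calls = 0
    val flaky = () => { calls += 1; if (calls < 3) throw new RuntimeException("transient") else "ok" }

    val outcome = retry(3)(flaky)
    ```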
  213. final def retryWhen(whenFactory: (Flux[Throwable]) ⇒ Publisher[_]): Flux[OUT]

    Retries this Flux when a companion sequence signals an item in response to this Flux error signal

    Retries this Flux when a companion sequence signals an item in response to this Flux error signal

    If the companion sequence signals when the Flux is active, the retry attempt is suppressed and any terminal signal will terminate the Flux source with the same signal immediately.

    whenFactory

    the Function1 providing a Flux signalling any error from the source sequence and returning a Publisher companion.

    returns

    a re-subscribing Flux on onError when the companion Publisher produces an onNext signal

    Definition Classes
    Flux
  214. final def sample[U](sampler: Publisher[U]): Flux[OUT]

    Sample this Flux and emit its latest value whenever the sampler Publisher signals a value.

    Sample this Flux and emit its latest value whenever the sampler Publisher signals a value.

    Termination of either Publisher will result in termination for the Subscriber as well.

    Both Publishers will run in unbounded mode because backpressure would interfere with the sampling precision.

    U

    the type of the sampler sequence

    sampler

    the sampler Publisher

    returns

    a sampled Flux by last item observed when the sampler Publisher signals

    Definition Classes
    Flux
  215. final def sample(timespan: Duration): Flux[OUT]

    Emit latest value for every given period of time.

    Emit latest value for every given period of time.

    timespan

    the duration to emit the latest observed item

    returns

    a sampled Flux by last item over a period of time

    Definition Classes
    Flux
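
    The time-based sampling can be sketched over timestamped events: within each sampling window only the latest observed value survives (an illustration over recorded (timestamp, value) pairs with fixed 100 ms buckets, not the operator's actual timer):

    ```scala
    // Sketch of sample(timespan): keep only the latest value of each window.
    val events = List((10L, "a"), (40L, "b"), (120L, "c"), (250L, "d"), (260L, "e"))
    val periodMs = 100L

    val sampled: List[String] =
      events.groupBy { case (t, _) => t / periodMs }  // bucket by window index
        .toList.sortBy(_._1)                          // windows in time order
        .map { case (_, window) => window.last._2 }   // latest value per window
    ```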
  216. final def sampleFirst[U](samplerFactory: (OUT) ⇒ Publisher[U]): Flux[OUT]

    Take a value from this Flux then use the duration provided by a generated Publisher to skip other values until that sampler Publisher signals.

    Take a value from this Flux then use the duration provided by a generated Publisher to skip other values until that sampler Publisher signals.

    U

    the companion reified type

    samplerFactory

    select a Publisher companion to signal onNext or onComplete to stop excluding other values from this sequence

    returns

    a sampled Flux by last item observed when the sampler signals

    Definition Classes
    Flux
  217. final def sampleFirst(timespan: Duration): Flux[OUT]

    Take a value from this Flux then use the duration provided to skip other values.

    Take a value from this Flux then use the duration provided to skip other values.

    timespan

    the duration to exclude other values from this sequence

    returns

    a sampled Flux by first item over a period of time

    Definition Classes
    Flux
  218. final def sampleTimeout[U](throttlerFactory: (OUT) ⇒ Publisher[U], maxConcurrency: Int): Flux[OUT]

    Emit the last value from this Flux only if there were no newer values emitted during the time window provided by a publisher for that particular last value.

    Emit the last value from this Flux only if there were no newer values emitted during the time window provided by a publisher for that particular last value.

    The provided maxConcurrency will keep a bounded maximum of concurrent timeouts and drop any new items until at least one timeout terminates.

    U

    the throttling type

    throttlerFactory

    select a Publisher companion to signal onNext or onComplete to stop checking other values from this sequence and emit the selected item

    maxConcurrency

    the maximum number of concurrent timeouts

    returns

    a sampled Flux by last single item observed before a companion Publisher emits

    Definition Classes
    Flux
  219. final def sampleTimeout[U](throttlerFactory: (OUT) ⇒ Publisher[U]): Flux[OUT]

    Emit the last value from this Flux only if there were no new values emitted during the time window provided by a publisher for that particular last value.

    Emit the last value from this Flux only if there were no new values emitted during the time window provided by a publisher for that particular last value.

    U

    the companion reified type

    throttlerFactory

    select a Publisher companion to signal onNext or onComplete to stop checking other values from this sequence and emit the selected item

    returns

    a sampled Flux by last single item observed before a companion Publisher emits

    Definition Classes
    Flux
  220. final def scan[A](initial: A, accumulator: (A, OUT) ⇒ A): Flux[A]

    Aggregate this Flux values with the help of an accumulator BiFunction and emits the intermediate results.

    Aggregate this Flux values with the help of an accumulator BiFunction and emits the intermediate results.

    The accumulation works as follows:

    
    result[0] = initialValue;
    result[1] = accumulator(result[0], source[0])
    result[2] = accumulator(result[1], source[1])
    result[3] = accumulator(result[2], source[2])
    ...
    
    

    A

    the accumulated type

    initial

    the initial argument to pass to the reduce function

    accumulator

    the accumulating BiFunction

    returns

    an accumulating Flux starting with initial state

    Definition Classes
    Flux
  221. final def scan(accumulator: (OUT, OUT) ⇒ OUT): Flux[OUT]

    Accumulate this Flux values with an accumulator BiFunction and returns the intermediate results of this function.

    Accumulate this Flux values with an accumulator BiFunction and returns the intermediate results of this function.

    Unlike scan(initial, accumulator), this operator doesn't take an initial value but treats the first Flux value as the initial value.
    The accumulation works as follows:

    
    result[0] = accumulator(source[0], source[1])
    result[1] = accumulator(result[0], source[2])
    result[2] = accumulator(result[1], source[3])
    ...
    
    

    accumulator

    the accumulating BiFunction

    returns

    an accumulating Flux

    Definition Classes
    Flux
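
    The accumulation tables above correspond directly to scanLeft on Scala collections (a collections analogy of the emission order, not the operator itself):

    ```scala
    // scan(initial, accumulator) matches scanLeft: every intermediate result is
    // emitted, starting with the initial value itself.
    val source = List(1, 2, 3)
    val seededScan = source.scanLeft(0)(_ + _)  // 0, 0+1, 1+2, 3+3

    // For the unseeded scan(accumulator), the first value seeds the accumulation
    // and only the accumulated results are emitted, per the table above.
    val unseededScan = source.tail.scanLeft(source.head)(_ + _).tail
    ```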
  222. def scan[T](key: Attr[T]): Option[T]

    Introspect a component's specific state attribute, returning an associated value specific to that component, or the default value associated with the key, or None if the attribute doesn't make sense for that particular component and has no sensible default.

    Introspect a component's specific state attribute, returning an associated value specific to that component, or the default value associated with the key, or None if the attribute doesn't make sense for that particular component and has no sensible default.

    key

    a Attr to resolve for the component.

    returns

    a value associated to the key or None if unmatched or unresolved

    Definition Classes
    Scannable
  223. def scanOrDefault[T](key: Attr[T], defaultValue: T): T

    Introspect a component's specific state attribute.

    Introspect a component's specific state attribute. If there's no specific value in the component for that key, fall back to returning the provided non null default.

    key

    a Attr to resolve for the component.

    defaultValue

    a fallback value if the key resolves to null

    returns

    a value associated to the key or the provided default if unmatched or unresolved

    Definition Classes
    Scannable
  224. def scanUnsafe(key: Attr[_]): Option[AnyRef]

    This method is used internally by components to define their key-value mappings in a single place.

    This method is used internally by components to define their key-value mappings in a single place. Although it is ignoring the generic type of the Attr key, implementors should take care to return values of the correct type, and return None if no specific value is available.

    For public consumption of attributes, prefer using Scannable.scan(Attr), which will return a typed value and fall back to the key's default if the component didn't define any mapping.

    key

    an Attr to resolve for the component.

    returns

    the value associated to the key for that specific component, or None if none.

    Definition Classes
    FluxProcessor → Scannable
  225. final def scanWith[A](initial: () ⇒ A, accumulator: (A, OUT) ⇒ A): Flux[A]

    Aggregate this Flux values with the help of an accumulator BiFunction and emits the intermediate results.

    Aggregate this Flux values with the help of an accumulator BiFunction and emits the intermediate results.

    The accumulation works as follows:

    
    result[0] = initialValue;
    result[1] = accumulator(result[0], source[0])
    result[2] = accumulator(result[1], source[1])
    result[3] = accumulator(result[2], source[2])
    ...
    
    

    A

    the accumulated type

    initial

    the initial supplier to init the first value to pass to the reduce function

    accumulator

    the accumulating BiFunction

    returns

    an accumulating Flux starting with initial state

    Definition Classes
    Flux
  226. final def serialize(): FluxProcessor[IN, OUT]

    Create a FluxProcessor that safely gates a multi-threaded producer.

    Create a FluxProcessor that safely gates a multi-threaded producer.

    returns

    a serializing FluxProcessor

  227. def serializeAlways: Boolean

    Returns serialization strategy.

    Returns serialization strategy. If true, FluxProcessor.sink() will always be serialized. Otherwise the sink is serialized only if FluxSink.onRequest is invoked.

    returns

    true to serialize any sink, false to delay serialization till onRequest

    Attributes
    protected
  228. final def share(): Flux[OUT]

    Returns a new Flux that multicasts (shares) the original Flux.

    Returns a new Flux that multicasts (shares) the original Flux. As long as there is at least one Subscriber this Flux will be subscribed and emitting data. When all subscribers have cancelled it will cancel the source Flux.

    This is an alias for Flux.publish followed by ConnectableFlux.refCount.

    returns

    a Flux that upon first subscribe causes the source Flux to subscribe once only; late subscribers might therefore miss items.

    Definition Classes
    Flux
  229. final def single(defaultValue: OUT): Mono[OUT]

    Expect and emit a single item from this Flux source or signal NoSuchElementException (or a default value) for empty source, IndexOutOfBoundsException for a multi-item source.

    Expect and emit a single item from this Flux source or signal NoSuchElementException (or a default value) for empty source, IndexOutOfBoundsException for a multi-item source.

    defaultValue

    a single fallback item if this Flux is empty

    returns

    a Mono with the eventual single item or a supplied default value

    Definition Classes
    Flux
  230. final def single(): Mono[OUT]

    Expect and emit a single item from this Flux source or signal NoSuchElementException (or a default generated value) for empty source, IndexOutOfBoundsException for a multi-item source.

    Expect and emit a single item from this Flux source or signal NoSuchElementException (or a default generated value) for empty source, IndexOutOfBoundsException for a multi-item source.

    returns

    a Mono with the eventual single item or an error signal

    Definition Classes
    Flux
  231. final def singleOrEmpty(): Mono[OUT]

    Expect and emit a zero or single item from this Flux source or NoSuchElementException for a multi-item source.

    Expect and emit a zero or single item from this Flux source or NoSuchElementException for a multi-item source.

    returns

    a Mono with the eventual single item or no item

    Definition Classes
    Flux
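
    The cardinality contract can be sketched over an already-materialized sequence: zero or one element is fine for singleOrEmpty(), while more than one element is an error (a plain-Scala analogy, not the operator itself):

    ```scala
    // Sketch of singleOrEmpty(): at most one element is allowed.
    def singleOrEmpty[A](items: List[A]): Option[A] = items match {
      case Nil      => None       // empty source: no item
      case a :: Nil => Some(a)    // exactly one item
      case _        => throw new NoSuchElementException("source emitted more than one item")
    }

    val one  = singleOrEmpty(List(5))
    val none = singleOrEmpty(List.empty[Int])
    ```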
  232. final def sink(strategy: OverflowStrategy): FluxSink[IN]

    Create a FluxSink that safely gates multi-threaded producer Subscriber.onNext.

    Create a FluxSink that safely gates multi-threaded producer Subscriber.onNext.

    The returned FluxSink will not apply any FluxSink.OverflowStrategy and overflowing FluxSink.next will behave in two possible ways depending on the Processor:

    • an unbounded processor will handle the overflow itself by dropping or buffering
    • a bounded processor will block/spin on IGNORE strategy, or apply the strategy behavior
    strategy

    the overflow strategy, see FluxSink.OverflowStrategy for the available strategies

    returns

    a serializing FluxSink

  233. final def sink(): FluxSink[IN]

    Create a FluxSink that safely gates multi-threaded producer Subscriber.onNext.

    Create a FluxSink that safely gates multi-threaded producer Subscriber.onNext.

    The returned FluxSink will not apply any FluxSink.OverflowStrategy and overflowing FluxSink.next will behave in two possible ways depending on the Processor:

    • an unbounded processor will handle the overflow itself by dropping or buffering
    • a bounded processor will block/spin
    returns

    a serializing FluxSink

  234. final def skip(timespan: Duration, timer: Scheduler): Flux[OUT]

    Skip elements from this Flux for the given time period.

    Skip elements from this Flux for the given time period.

    timespan

    the time window to exclude next signals

    timer

    a time-capable Scheduler instance to run on

    returns

    a dropping Flux until the end of the given timespan

    Definition Classes
    Flux
  235. final def skip(timespan: Duration): Flux[OUT]

    Skip elements from this Flux for the given time period.

    Skip elements from this Flux for the given time period.

    timespan

    the time window to exclude next signals

    returns

    a dropping Flux until the end of the given timespan

    Definition Classes
    Flux
  236. final def skip(skipped: Long): Flux[OUT]

    Skip next the specified number of elements from this Flux.

    Skip next the specified number of elements from this Flux.

    skipped

    the number of elements to drop

    returns

    a dropping Flux until the specified skipped number of elements

    Definition Classes
    Flux
  237. final def skipLast(n: Int): Flux[OUT]

    Skip the last specified number of elements from this Flux.

    Skip the last specified number of elements from this Flux.

    n

    the number of elements to ignore before completion

    returns

    a dropping Flux for the specified skipped number of elements before termination

    Definition Classes
    Flux
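
    The count-based skip and skipLast semantics can be sketched with list slicing (an illustrative simulation, not the Reactor API):

    ```python
    def skip(items, n):
        """Simulate Flux.skip(n): drop the first n elements."""
        return items[n:]

    def skip_last(items, n):
        """Simulate Flux.skipLast(n): drop the last n elements before completion."""
        return items[:len(items) - n] if n > 0 else items
    ```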
  238. final def skipUntil(untilPredicate: (OUT) ⇒ Boolean): Flux[OUT]

    Skips values from this Flux until a Predicate returns true for the value.

    Skips values from this Flux until a Predicate returns true for the value. Will include the matched value.

    untilPredicate

    the Predicate evaluating to true to stop skipping.

    returns

    a dropping Flux until the Predicate matches

    Definition Classes
    Flux
  239. final def skipUntilOther(other: Publisher[_]): Flux[OUT]

    Skip values from this Flux until a specified Publisher signals an onNext or onComplete.

    Skip values from this Flux until a specified Publisher signals an onNext or onComplete.

    other

    the Publisher companion to coordinate with to stop skipping

    returns

    a dropping Flux until the other Publisher emits

    Definition Classes
    Flux
  240. final def skipWhile(skipPredicate: (OUT) ⇒ Boolean): Flux[OUT]

    Skips values from this Flux while a Predicate returns true for the value.

    Skips values from this Flux while a Predicate returns true for the value.

    skipPredicate

    the Predicate evaluating to true to keep skipping.

    returns

    a dropping Flux while the Predicate matches

    Definition Classes
    Flux
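
    The difference between skipUntil (the matched value is included) and skipWhile (skipping stops at the first non-match) can be sketched as (an illustrative simulation, not the Reactor API):

    ```python
    def skip_until(items, pred):
        """Simulate Flux.skipUntil: skip values until pred returns true;
        the matched value IS included in the output."""
        for i, v in enumerate(items):
            if pred(v):
                return items[i:]
        return []

    def skip_while(items, pred):
        """Simulate Flux.skipWhile: skip values while pred returns true."""
        for i, v in enumerate(items):
            if not pred(v):
                return items[i:]
        return []
    ```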
  241. final def sort(sortFunction: Ordering[OUT]): Flux[OUT]

    Returns a Flux that sorts the events emitted by source Flux given the Ordering function.

    Returns a Flux that sorts the events emitted by source Flux given the Ordering function.

    Note that calling sorted with long, non-terminating or infinite sources might cause OutOfMemoryError

    sortFunction

    a function that compares two items emitted by this Flux that indicates their sort order

    returns

    a sorting Flux

    Definition Classes
    Flux
  242. final def sort(): Flux[OUT]

    Returns a Flux that sorts the events emitted by source Flux.

    Returns a Flux that sorts the events emitted by source Flux. Each item emitted by the Flux must implement Comparable with respect to all other items in the sequence.

    Note that calling sort with long, non-terminating or infinite sources might cause OutOfMemoryError. Use sequence splitting like Flux.windowWhen to sort batches in that case.

    returns

    a sorting Flux

    Definition Classes
    Flux
    Exceptions thrown

    ClassCastException if any item emitted by the Flux does not implement Comparable with respect to all other items emitted by the Flux

  243. final def startWith(publisher: Publisher[_ <: OUT]): Flux[OUT]

    Prepend the given Publisher sequence before this Flux sequence.

    Prepend the given Publisher sequence before this Flux sequence.

    publisher

    the Publisher whose values to prepend

    returns

    a prefixed Flux with given Publisher sequence

    Definition Classes
    Flux
  244. final def startWith(values: OUT*): Flux[OUT]

    Prepend the given values before this Flux sequence.

    Prepend the given values before this Flux sequence.

    values

    the array of values to start with

    returns

    a prefixed Flux with given values

    Definition Classes
    Flux
  245. final def startWith(iterable: Iterable[_ <: OUT]): Flux[OUT]

    Prepend the given Iterable before this Flux sequence.

    Prepend the given Iterable before this Flux sequence.

    iterable

    the sequence of values to start the sequence with

    returns

    a prefixed Flux with given Iterable

    Definition Classes
    Flux
  246. def subscribe(s: Subscriber[_ >: OUT]): Unit
    Definition Classes
    FluxProcessorFlux → Publisher
  247. final def subscribe(consumer: (OUT) ⇒ Unit, errorConsumer: (Throwable) ⇒ Unit, completeConsumer: () ⇒ Unit, subscriptionConsumer: (Subscription) ⇒ Unit): Disposable

    Subscribe consumer to this Flux that will consume all the sequence.

    Subscribe consumer to this Flux that will consume all the sequence. It will let the provided subscriptionConsumer request the adequate amount of data, or request unbounded demand Long.MAX_VALUE if no such consumer is provided.

    For a passive version that observes and forwards incoming data see Flux.doOnNext, Flux.doOnError, Flux.doOnComplete and Flux.doOnSubscribe.

    For a version that gives you more control over backpressure and the request, see Flux.subscribe with a reactor.core.publisher.BaseSubscriber.

    consumer

    the consumer to invoke on each value

    errorConsumer

    the consumer to invoke on error signal

    completeConsumer

    the consumer to invoke on complete signal

    subscriptionConsumer

    the consumer to invoke on subscribe signal, to be used for the initial request, or null for max request

    returns

    a new Disposable to dispose the Subscription

    Definition Classes
    Flux
  248. final def subscribe(consumer: (OUT) ⇒ Unit, errorConsumer: (Throwable) ⇒ Unit, completeConsumer: () ⇒ Unit): Disposable

    Subscribe consumer to this Flux that will consume all the sequence.

    Subscribe consumer to this Flux that will consume all the sequence. It will request unbounded demand Long.MAX_VALUE. For a passive version that observes and forwards incoming data see Flux.doOnNext, Flux.doOnError and Flux.doOnComplete.

    For a version that gives you more control over backpressure and the request, see Flux.subscribe with a reactor.core.publisher.BaseSubscriber.

    consumer

    the consumer to invoke on each value

    errorConsumer

    the consumer to invoke on error signal

    completeConsumer

    the consumer to invoke on complete signal

    returns

    a new Disposable to dispose the Subscription

    Definition Classes
    Flux
  249. final def subscribe(consumer: (OUT) ⇒ Unit, errorConsumer: (Throwable) ⇒ Unit): Disposable

    Subscribe consumer to this Flux that will consume all the sequence.

    Subscribe consumer to this Flux that will consume all the sequence. It will request unbounded demand Long.MAX_VALUE. For a passive version that observes and forwards incoming data see Flux.doOnNext and Flux.doOnError.

    For a version that gives you more control over backpressure and the request, see Flux.subscribe with a reactor.core.publisher.BaseSubscriber.

    consumer

    the consumer to invoke on each next signal

    errorConsumer

    the consumer to invoke on error signal

    returns

    a new Disposable to dispose the Subscription

    Definition Classes
    Flux
  250. final def subscribe(consumer: (OUT) ⇒ Unit): Disposable

    Subscribe a consumer to this Flux that will consume all the sequence.

    Subscribe a consumer to this Flux that will consume all the sequence. It will request an unbounded demand.

    For a passive version that observes and forwards incoming data see Flux.doOnNext.

    For a version that gives you more control over backpressure and the request, see Flux.subscribe with a reactor.core.publisher.BaseSubscriber.

    consumer

    the consumer to invoke on each value

    returns

    a new Disposable to dispose the Subscription

    Definition Classes
    Flux
  251. final def subscribe(): Disposable

    Start the chain and request unbounded demand.

    Start the chain and request unbounded demand.

    returns

    a Disposable task to execute to dispose and cancel the underlying Subscription

    Definition Classes
    Flux
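
    The division of labour between the subscribe overloads — value, error and completion consumers — can be sketched as a synchronous simulation (illustrative only; a real Reactor subscription is demand-driven and asynchronous):

    ```python
    def subscribe(source, on_next, on_error=None, on_complete=None):
        """Drive a source iterable, dispatching each signal to its consumer."""
        try:
            for v in source:
                on_next(v)
        except Exception as e:
            if on_error is None:
                raise
            on_error(e)  # error signal replaces completion
        else:
            if on_complete is not None:
                on_complete()

    seen, done = [], []
    subscribe([1, 2, 3], seen.append, on_complete=lambda: done.append(True))
    ```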
  252. final def subscribeOn(scheduler: Scheduler): Flux[OUT]

    Run subscribe, onSubscribe and request on a supplied Scheduler.

    Run subscribe, onSubscribe and request on a supplied Scheduler.

    Typically used for slow-publisher, fast-consumer(s) scenarios, e.g. blocking IO.

    flux.subscribeOn(Schedulers.single()).subscribe()

    scheduler

    a checked reactor.core.scheduler.Scheduler.Worker factory

    returns

    a Flux requesting asynchronously

    Definition Classes
    Flux
  253. final def subscribeWith[E <: Subscriber[OUT]](subscriber: E): E

    A chaining Publisher.subscribe alternative that allows inline conversion to a hot emitter (e.g.

    A chaining Publisher.subscribe alternative that allows inline conversion to a hot emitter (e.g. reactor.core.publisher.FluxProcessor or reactor.core.publisher.MonoProcessor).

    flux.subscribeWith(WorkQueueProcessor.create()).subscribe()

    If you need more control over backpressure and the request, use a reactor.core.publisher.BaseSubscriber.

    E

    the reified type from the input/output subscriber

    subscriber

    the Subscriber to subscribe and return

    returns

    the passed Subscriber

    Definition Classes
    Flux
  254. final def subscriberContext(doOnContext: (Context) ⇒ Context): Flux[OUT]

    Enrich a potentially empty downstream Context by applying a Function1 to it, producing a new Context that is propagated upstream.

    Enrich a potentially empty downstream Context by applying a Function1 to it, producing a new Context that is propagated upstream.

    The Context propagation happens once per subscription (not on each onNext): it is done during the subscribe(Subscriber) phase, which runs from the last operator of a chain towards the first.

    So this operator enriches a Context coming from under it in the chain (downstream, by default an empty one) and passes the new enriched Context to operators above it in the chain (upstream, by way of them using Flux#subscribe(Subscriber,Context)).

    doOnContext

    the function taking a previous Context state and returning a new one.

    returns

    a contextualized Flux

    Definition Classes
    Flux
    See also

    Context

  255. final def subscriberContext(mergeContext: Context): Flux[OUT]

    Enrich a potentially empty downstream Context by adding all values from the given Context, producing a new Context that is propagated upstream.

    Enrich a potentially empty downstream Context by adding all values from the given Context, producing a new Context that is propagated upstream.

    The Context propagation happens once per subscription (not on each onNext): it is done during the subscribe(Subscriber) phase, which runs from the last operator of a chain towards the first.

    So this operator enriches a Context coming from under it in the chain (downstream, by default an empty one) and passes the new enriched Context to operators above it in the chain (upstream, by way of them using Flux#subscribe(Subscriber,Context)).

    mergeContext

    the Context to merge with a previous Context state, returning a new one.

    returns

    a contextualized Flux

    Definition Classes
    Flux
    See also

    Context

  256. final def switchIfEmpty(alternate: Publisher[_ <: OUT]): Flux[OUT]

    Provide an alternative if this sequence is completed without any data

    Provide an alternative if this sequence is completed without any data

    alternate

    the alternate publisher if this sequence is empty

    returns

    an alternating Flux on source onComplete without elements

    Definition Classes
    Flux
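
    The switchIfEmpty fallback can be sketched as (an illustrative simulation, not the Reactor API):

    ```python
    def switch_if_empty(items, alternate):
        """Simulate Flux.switchIfEmpty: fall back to the alternate sequence
        only when the source completes without emitting any data."""
        return list(items) if items else list(alternate)
    ```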
  257. final def switchMap[V](fn: (OUT) ⇒ Publisher[_ <: V], prefetch: Int): Flux[V]

    Switch to a new Publisher generated via a Function whenever this Flux produces an item.

    Switch to a new Publisher generated via a Function whenever this Flux produces an item.

    V

    the type of the return value of the transformation function

    fn

    the transformation function

    prefetch

    the produced demand for inner sources

    returns

    an alternating Flux on source onNext

    Definition Classes
    Flux
  258. final def switchMap[V](fn: (OUT) ⇒ Publisher[_ <: V]): Flux[V]

    Switch to a new Publisher generated via a Function whenever this Flux produces an item.

    Switch to a new Publisher generated via a Function whenever this Flux produces an item.

    V

    the type of the return value of the transformation function

    fn

    the transformation function

    returns

    an alternating Flux on source onNext

    Definition Classes
    Flux
  259. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  260. final def tag(key: String, value: String): Flux[OUT]

    Tag this flux with a key/value pair.

    Tag this flux with a key/value pair. These can be retrieved as a Stream of all tags throughout the publisher chain by using reactor.core.scala.Scannable.tags() (as traversed by reactor.core.scala.Scannable.parents()).

    key

    a tag key

    value

    a tag value

    returns

    the same sequence, but bearing tags

    Definition Classes
    Flux
  261. def tags: Stream[(String, String)]

    Visit this Scannable and its Scannable.parents() and stream all the observed tags

    Visit this Scannable and its Scannable.parents() and stream all the observed tags

    returns

    the stream of tags for this Scannable and its parents

    Definition Classes
    Scannable
  262. final def take(timespan: Duration, timer: Scheduler): Flux[OUT]

    Relay values from this Flux until the given time period elapses.

    Relay values from this Flux until the given time period elapses.

    If the time period is zero, the Subscriber gets completed if this Flux completes, signals an error or signals its first value (which is not relayed, though).

    timespan

    the time window of items to emit from this Flux

    timer

    a time-capable Scheduler instance to run on

    returns

    a time limited Flux

    Definition Classes
    Flux
  263. final def take(timespan: Duration): Flux[OUT]

    Relay values from this Flux until the given time period elapses.

    Relay values from this Flux until the given time period elapses.

    If the time period is zero, the Subscriber gets completed if this Flux completes, signals an error or signals its first value (which is not relayed, though).

    timespan

    the time window of items to emit from this Flux

    returns

    a time limited Flux

    Definition Classes
    Flux
  264. final def take(n: Long): Flux[OUT]

    Take only the first N values from this Flux.

    Take only the first N values from this Flux.

    If N is zero, the Subscriber gets completed if this Flux completes, signals an error or signals its first value (which is not relayed, though).

    n

    the number of items to emit from this Flux

    returns

    a size limited Flux

    Definition Classes
    Flux
  265. final def takeLast(n: Int): Flux[OUT]

    Emit the last N values this Flux emitted before its completion.

    Emit the last N values this Flux emitted before its completion.

    n

    the number of items from this Flux to retain and emit on onComplete

    returns

    a terminating Flux sub-sequence

    Definition Classes
    Flux
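
    The count-based take and takeLast semantics can be sketched with list slicing (an illustrative simulation, not the Reactor API):

    ```python
    def take(items, n):
        """Simulate Flux.take(n): emit only the first n values."""
        return items[:n]

    def take_last(items, n):
        """Simulate Flux.takeLast(n): emit the last n values on completion."""
        return items[-n:] if n > 0 else []
    ```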
  266. final def takeUntil(predicate: (OUT) ⇒ Boolean): Flux[OUT]

    Relay values from this Flux until the given Predicate matches.

    Relay values from this Flux until the given Predicate matches. Unlike Flux.takeWhile, this will include the matched data.

    predicate

    the Predicate to signal when to stop replaying signal from this Flux

    returns

    an eventually limited Flux

    Definition Classes
    Flux
  267. final def takeUntilOther(other: Publisher[_]): Flux[OUT]

    Relay values from this Flux until the given Publisher emits.

    Relay values from this Flux until the given Publisher emits.

    other

    the Publisher to signal when to stop replaying signal from this Flux

    returns

    an eventually limited Flux

    Definition Classes
    Flux
  268. final def takeWhile(continuePredicate: (OUT) ⇒ Boolean): Flux[OUT]

    Relay values while a predicate returns True for the values (checked before each value is delivered).

    Relay values while a predicate returns True for the values (checked before each value is delivered). Unlike Flux.takeUntil, this will exclude the matched data.

    continuePredicate

    the Predicate invoked each onNext returning False to terminate

    returns

    an eventually limited Flux

    Definition Classes
    Flux
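
    The inclusive/exclusive distinction between takeUntil and takeWhile can be sketched as (an illustrative simulation, not the Reactor API):

    ```python
    def take_until(items, pred):
        """Simulate Flux.takeUntil: relay until pred matches;
        the matched value IS included."""
        out = []
        for v in items:
            out.append(v)
            if pred(v):
                break
        return out

    def take_while(items, pred):
        """Simulate Flux.takeWhile: relay while pred holds;
        the first non-matching value is NOT included."""
        out = []
        for v in items:
            if not pred(v):
                break
            out.append(v)
        return out
    ```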
  269. final def then(): Mono[Unit]

    Return a Mono[Unit] that completes when this Flux completes.

    Return a Mono[Unit] that completes when this Flux completes. This will actively ignore the sequence and only replay completion or error signals.

    returns

    a new Mono

    Definition Classes
    Flux
  270. final def thenEmpty(other: Publisher[Unit]): Mono[Unit]

    Return a Mono[Unit] that waits for this Flux to complete then for a supplied Publisher[Unit] to also complete.

    Return a Mono[Unit] that waits for this Flux to complete then for a supplied Publisher[Unit] to also complete. The second completion signal is replayed, or any error signal that occurs instead.

    other

    a Publisher to wait for after this Flux's termination

    returns

    a new Mono completing when both publishers have completed in sequence

    Definition Classes
    Flux
  271. final def thenMany[V](other: Publisher[V]): Flux[V]

    Return a Flux that emits the sequence of the supplied Publisher after this Flux completes, ignoring this flux elements.

    Return a Flux that emits the sequence of the supplied Publisher after this Flux completes, ignoring this flux elements. If an error occurs it immediately terminates the resulting flux.

    V

    the supplied produced type

    other

    a Publisher to emit from after termination

    returns

    a new Flux emitting eventually from the supplied Publisher

    Definition Classes
    Flux
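
    The thenMany behaviour — drain and discard this sequence, then replay the other publisher — can be sketched as (an illustrative, synchronous simulation, not the Reactor API):

    ```python
    def then_many(source, other):
        """Simulate Flux.thenMany: actively consume and ignore this sequence,
        then emit the supplied publisher's sequence."""
        for _ in source:  # drain, discarding elements
            pass
        return list(other)
    ```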
  272. final def timeout[U, V](firstTimeout: Publisher[U], nextTimeoutFactory: (OUT) ⇒ Publisher[V], fallback: Publisher[_ <: OUT]): Flux[OUT]

    Switch to a fallback Publisher in case a first item from this Flux has not been emitted before the given Publisher emits.

    Switch to a fallback Publisher in case a first item from this Flux has not been emitted before the given Publisher emits. The following items will be individually timed via the factory provided Publisher.

    U

    the type of the elements of the first timeout Publisher

    V

    the type of the elements of the subsequent timeout Publishers

    firstTimeout

    the timeout Publisher that must not emit before the first signal from this Flux

    nextTimeoutFactory

    the timeout Publisher factory for each next item

    fallback

    the fallback Publisher to subscribe when a timeout occurs

    returns

    a first then per-item expirable Flux with a fallback Publisher

    Definition Classes
    Flux
  273. final def timeout[U, V](firstTimeout: Publisher[U], nextTimeoutFactory: (OUT) ⇒ Publisher[V]): Flux[OUT]

    Signal a java.util.concurrent.TimeoutException in case a first item from this Flux has not been emitted before the given Publisher emits.

    Signal a java.util.concurrent.TimeoutException in case a first item from this Flux has not been emitted before the given Publisher emits. The following items will be individually timed via the factory provided Publisher.

    U

    the type of the elements of the first timeout Publisher

    V

    the type of the elements of the subsequent timeout Publishers

    firstTimeout

    the timeout Publisher that must not emit before the first signal from this Flux

    nextTimeoutFactory

    the timeout Publisher factory for each next item

    returns

    a first then per-item expirable Flux

    Definition Classes
    Flux
  274. final def timeout[U](firstTimeout: Publisher[U]): Flux[OUT]

    Signal a java.util.concurrent.TimeoutException in case a first item from this Flux has not been emitted before the given Publisher emits.

    Signal a java.util.concurrent.TimeoutException in case a first item from this Flux has not been emitted before the given Publisher emits.

    U

    the type of the timeout Publisher

    firstTimeout

    the timeout Publisher that must not emit before the first signal from this Flux

    returns

    an expirable Flux if the first item does not come before a Publisher signal

    Definition Classes
    Flux
  275. final def timeout(timeout: Duration, fallback: Option[Publisher[_ <: OUT]]): Flux[OUT]

    Switch to a fallback Publisher in case a per-item period fires before the next item arrives from this Flux.

    Switch to a fallback Publisher in case a per-item period fires before the next item arrives from this Flux.

    If the given Publisher is None, signal a java.util.concurrent.TimeoutException.

    timeout

    the timeout between two signals from this Flux

    fallback

    the optional fallback Publisher to subscribe when a timeout occurs

    returns

    a per-item expirable Flux with a fallback Publisher

    Definition Classes
    Flux
  276. final def timeout(timeout: Duration): Flux[OUT]

    Signal a java.util.concurrent.TimeoutException in case a per-item period fires before the next item arrives from this Flux.

    Signal a java.util.concurrent.TimeoutException in case a per-item period fires before the next item arrives from this Flux.

    timeout

    the timeout between two signals from this Flux

    returns

    a per-item expirable Flux

    Definition Classes
    Flux
  277. final def timestamp(scheduler: Scheduler): Flux[(Long, OUT)]

    Emit a Tuple2 pair of the current system time in millis (Long) and the associated data (T) for each item from this Flux.

    Emit a Tuple2 pair of the current system time in millis (Long) and the associated data (T) for each item from this Flux.

    scheduler

    the Scheduler to read time from

    returns

    a timestamped Flux

    Definition Classes
    Flux
  278. final def timestamp(): Flux[(Long, OUT)]

    Emit a Tuple2 pair of the current system time in millis (Long) and the associated data (T) for each item from this Flux.

    Emit a Tuple2 pair of the current system time in millis (Long) and the associated data (T) for each item from this Flux.

    returns

    a timestamped Flux

    Definition Classes
    Flux
  279. final def toIterable(batchSize: Int, queueProvider: Option[Supplier[Queue[OUT]]]): Iterable[OUT]

    Transform this Flux into a lazy Iterable blocking on next calls.

    Transform this Flux into a lazy Iterable blocking on next calls.

    batchSize

    the bounded capacity to produce to this Flux or Int.MaxValue for unbounded

    queueProvider

    the optional supplier of the queue implementation to be used for transferring elements across threads. The supplier of queue can easily be obtained using reactor.util.concurrent.QueueSupplier.get

    returns

    a blocking Iterable

    Definition Classes
    Flux
  280. final def toIterable(batchSize: Int): Iterable[OUT]

    Transform this Flux into a lazy Iterable blocking on next calls.

    Transform this Flux into a lazy Iterable blocking on next calls.

    batchSize

    the bounded capacity to produce to this Flux or Int.MaxValue for unbounded

    returns

    a blocking Iterable

    Definition Classes
    Flux
  281. final def toIterable(): Iterable[OUT]

    Transform this Flux into a lazy Iterable blocking on next calls.

    Transform this Flux into a lazy Iterable blocking on next calls.

    returns

    a blocking Iterable

    Definition Classes
    Flux
  282. final def toStream(batchSize: Int): Stream[OUT]

    Transform this Flux into a lazy Stream blocking on next calls.

    Transform this Flux into a lazy Stream blocking on next calls.

    batchSize

    the bounded capacity to produce to this Flux or Int.MaxValue for unbounded

    returns

    a Stream of unknown size with onClose attached to Subscription.cancel

    Definition Classes
    Flux
  283. final def toStream(): Stream[OUT]

    Transform this Flux into a lazy Stream blocking on next calls.

    Transform this Flux into a lazy Stream blocking on next calls.

    returns

    a Stream of unknown size with onClose attached to Subscription.cancel

    Definition Classes
    Flux
  284. def toString(): String
    Definition Classes
    AnyRef → Any
  285. final def transform[V](transformer: (Flux[OUT]) ⇒ Publisher[V]): Flux[V]

    Transform this Flux in order to generate a target Flux.

    Transform this Flux in order to generate a target Flux. Unlike Flux.compose, the provided function is executed as part of assembly.

    V

    the item type in the returned Flux

    transformer

    the Function1 to immediately map this Flux into a target Flux instance.

    returns

    a new Flux

    Definition Classes
    Flux
    Example:
    1. val applySchedulers = (flux: Flux[Int]) => flux.subscribeOn(Schedulers.elastic()).publishOn(Schedulers.parallel())
      flux.transform(applySchedulers).map(v => v * v).subscribe()
    See also

    Flux.compose for deferred composition of Flux for each Subscriber

    Flux.as for a loose conversion to an arbitrary type

  286. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  287. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  288. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @throws( ... )
  289. final def window(timespan: Duration, timeshift: Duration, timer: Scheduler): Flux[Flux[OUT]]

    Split this Flux sequence into multiple Flux delimited by the given timeshift period, starting from the first item.

    Split this Flux sequence into multiple Flux delimited by the given timeshift period, starting from the first item. Each Flux bucket will onComplete after the timespan period has elapsed.

    When timeshift > timespan : dropping windows

    When timeshift < timespan : overlapping windows

    When timeshift == timespan : exact windows

    timespan

    the maximum Flux window duration in milliseconds

    timeshift

    the period of time in milliseconds to create new Flux windows

    timer

    the Scheduler to run on

    returns

    a windowing Flux of Flux buckets delimited by the given timeshift period

    Definition Classes
    Flux
  290. final def window(timespan: Duration, timer: Scheduler): Flux[Flux[OUT]]

    Split this Flux sequence into continuous, non-overlapping windows delimited by a given period.

    Split this Flux sequence into continuous, non-overlapping windows delimited by a given period.

    timespan

    the Duration to delimit Flux windows

    timer

    a time-capable Scheduler instance to run on

    returns

    a windowing Flux of timed Flux buckets

    Definition Classes
    Flux
  291. final def window(timespan: Duration, timeshift: Duration): Flux[Flux[OUT]]

    Split this Flux sequence into multiple Flux delimited by the given timeshift period, starting from the first item.

    Split this Flux sequence into multiple Flux delimited by the given timeshift period, starting from the first item. Each Flux bucket will onComplete after the timespan period has elapsed.

    When timeshift > timespan : dropping windows

    When timeshift < timespan : overlapping windows

    When timeshift == timespan : exact windows

    timespan

    the maximum Flux window duration

    timeshift

    the period of time to create new Flux windows

    returns

    a windowing Flux of Flux buckets delimited by the given timeshift period

    Definition Classes
    Flux
  292. final def window(timespan: Duration): Flux[Flux[OUT]]

    Split this Flux sequence into continuous, non-overlapping windows delimited by a given period.

    Split this Flux sequence into continuous, non-overlapping windows delimited by a given period.

    timespan

    the duration to delimit Flux windows

    returns

    a windowing Flux of timed Flux buckets

    Definition Classes
    Flux
  293. final def window(boundary: Publisher[_]): Flux[Flux[OUT]]

    Split this Flux sequence into continuous, non-overlapping windows where the window boundary is signalled by another Publisher

    Split this Flux sequence into continuous, non-overlapping windows where the window boundary is signalled by another Publisher

    boundary

    a Publisher to emit any item for a split signal and complete to terminate

    returns

    a windowing Flux delimiting its sub-sequences by a given Publisher

    Definition Classes
    Flux
  294. final def window(maxSize: Int, skip: Int): Flux[Flux[OUT]]

    Split this Flux sequence into multiple Flux delimited by the given skip count, starting from the first item.

    Split this Flux sequence into multiple Flux delimited by the given skip count, starting from the first item. Each Flux bucket will onComplete after maxSize items have been routed.

    When skip > maxSize : dropping windows

    When skip < maxSize : overlapping windows

    When skip == maxSize : exact windows

    maxSize

    the maximum routed items per Flux

    skip

    the number of items to count before emitting a new bucket Flux

    returns

    a windowing Flux of sized Flux buckets every skip count

    Definition Classes
    Flux
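
    The three regimes — dropping (skip > maxSize), overlapping (skip < maxSize) and exact (skip == maxSize) windows — can be sketched with list slicing (an illustrative simulation, not the Reactor API):

    ```python
    def window(items, max_size, skip):
        """Simulate Flux.window(maxSize, skip): open a new window every `skip`
        items, each holding at most max_size items."""
        return [items[i:i + max_size] for i in range(0, len(items), skip)]
    ```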
  295. final def window(maxSize: Int): Flux[Flux[OUT]]

    Split this Flux sequence into multiple Flux delimited by the given maxSize count and starting from the first item.

    Split this Flux sequence into multiple Flux delimited by the given maxSize count and starting from the first item. Each Flux bucket will onComplete after maxSize items have been routed.

    maxSize

    the maximum routed items before emitting onComplete per Flux bucket

    returns

    a windowing Flux of sized Flux buckets

    Definition Classes
    Flux
  296. final def windowTimeout(maxSize: Int, timespan: Duration, timer: Scheduler): Flux[Flux[OUT]]

    Split this Flux sequence into multiple Flux delimited by the given maxSize number of items, starting from the first item.

    Split this Flux sequence into multiple Flux delimited by the given maxSize number of items, starting from the first item. Flux windows will onComplete once the given timespan elapses, if the maxSize number of items has not been reached first.

    maxSize

    the maximum Flux window items to count before onComplete

    timespan

    the timeout to use to onComplete a given window if size is not counted yet

    timer

    the Scheduler to run on

    returns

    a windowing Flux of sized or timed Flux buckets

    Definition Classes
    Flux
  297. final def windowTimeout(maxSize: Int, timespan: Duration): Flux[Flux[OUT]]

    Split this Flux sequence into multiple Flux delimited by the given maxSize number of items, starting from the first item.

    Split this Flux sequence into multiple Flux delimited by the given maxSize number of items, starting from the first item. Flux windows will onComplete once the given timespan elapses, if the maxSize number of items has not been reached first.

    maxSize

    the maximum Flux window items to count before onComplete

    timespan

    the timeout to use to onComplete a given window if size is not counted yet

    returns

    a windowing Flux of sized or timed Flux buckets

    Definition Classes
    Flux
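As a hedged sketch of the size-or-timeout cut rule (not the reactive implementation), the following models `windowTimeout` over a list of `(value, arrivalMillis)` pairs: a window closes when it holds maxSize items, or when it has been open longer than the timespan. The names `cut`, `events`, and `timespanMillis` are illustrative only.

```scala
// Sketch only: close the current window on size OR elapsed time, whichever first.
def cut(events: List[(Int, Long)], maxSize: Int, timespanMillis: Long): List[List[Int]] = {
  val (done, current, _) = events.foldLeft((List.empty[List[Int]], List.empty[Int], 0L)) {
    case ((closed, open, openedAt), (v, t)) =>
      val start = if (open.isEmpty) t else openedAt
      if (open.nonEmpty && t - start >= timespanMillis)
        (closed :+ open, List(v), t)         // timeout fired: close, start new window with v
      else if (open.size + 1 == maxSize)
        (closed :+ (open :+ v), Nil, 0L)     // maxSize reached: close immediately
      else
        (closed, open :+ v, start)           // keep accumulating in the open window
  }
  if (current.nonEmpty) done :+ current else done
}
```

For example, with `maxSize = 3` and a 100 ms timespan, three fast items fill a window by size, while a long gap forces a time-based cut.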
  298. final def windowUntil(boundaryTrigger: (OUT) ⇒ Boolean, cutBefore: Boolean, prefetch: Int): Flux[Flux[OUT]]

    Split this Flux sequence into multiple Flux windows delimited by the given predicate and using a prefetch.

    Split this Flux sequence into multiple Flux windows delimited by the given predicate and using a prefetch. A new window is opened each time the predicate returns true.

    If cutBefore is true, the old window will onComplete and the triggering element will be emitted in the new window. Note it can mean that an empty window is sometimes emitted, eg. if the first element in the sequence immediately matches the predicate.

    Otherwise, the triggering element will be emitted in the old window before it does onComplete, similar to Flux.windowUntil(Predicate).

    boundaryTrigger

    a predicate that triggers the next window when it becomes true.

    cutBefore

    set to true to include the triggering element in the new window rather than the old.

    prefetch

    the request size to use for this Flux.

    returns

    a Flux of Flux windows, bounded depending on the predicate.

    Definition Classes
    Flux
  299. final def windowUntil(boundaryTrigger: (OUT) ⇒ Boolean, cutBefore: Boolean): Flux[Flux[OUT]]

    Split this Flux sequence into multiple Flux windows delimited by the given predicate.

    Split this Flux sequence into multiple Flux windows delimited by the given predicate. A new window is opened each time the predicate returns true.

    If cutBefore is true, the old window will onComplete and the triggering element will be emitted in the new window. Note it can mean that an empty window is sometimes emitted, eg. if the first element in the sequence immediately matches the predicate.

    Otherwise, the triggering element will be emitted in the old window before it does onComplete, similar to Flux.windowUntil(Predicate).

    boundaryTrigger

    a predicate that triggers the next window when it becomes true.

    cutBefore

    set to true to include the triggering element in the new window rather than the old.

    returns

    a Flux of Flux windows, bounded depending on the predicate.

    Definition Classes
    Flux
  300. final def windowUntil(boundaryTrigger: (OUT) ⇒ Boolean): Flux[Flux[OUT]]

    Split this Flux sequence into multiple Flux windows delimited by the given predicate.

    Split this Flux sequence into multiple Flux windows delimited by the given predicate. A new window is opened each time the predicate returns true, at which point the previous window will receive the triggering element then onComplete.

    boundaryTrigger

    a predicate that triggers the next window when it becomes true.

    returns

    a Flux of Flux windows, bounded depending on the predicate.

    Definition Classes
    Flux
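As a hedged sketch on a plain List (not the reactive implementation), the two cut modes of `windowUntil` can be modeled as follows: with `cutBefore = false` the triggering element ends the old window; with `cutBefore = true` it opens the new one.

```scala
// Sketch only: models windowUntil's cut placement on a List.
def windowUntil[A](xs: List[A], trigger: A => Boolean, cutBefore: Boolean): List[List[A]] =
  xs.foldLeft(List(List.empty[A])) { (acc, a) =>
    val open = acc.last
    if (trigger(a))
      if (cutBefore) acc.init :+ open :+ List(a)  // close old window, new one starts with a
      else acc.init :+ (open :+ a) :+ Nil         // a ends the old window
    else acc.init :+ (open :+ a)
  } match {
    case ws if ws.last.isEmpty && !cutBefore => ws.init  // drop trailing empty window
    case ws => ws
  }
```

Note that a leading element matching the predicate with `cutBefore = true` yields a leading empty window, as the entry above warns.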
  301. final def windowWhen[U, V](bucketOpening: Publisher[U], closeSelector: (U) ⇒ Publisher[V]): Flux[Flux[OUT]]

    Split this Flux sequence into potentially overlapping windows controlled by items of a start Publisher and end Publisher derived from the start values.

    Split this Flux sequence into potentially overlapping windows controlled by items of a start Publisher and end Publisher derived from the start values.

    When the Open signal strictly does not overlap the Close signal: dropping windows

    When the Open signal is strictly more frequent than the Close signal: overlapping windows

    When the Open signal is exactly coordinated with the Close signal: exact windows

    U

    the type of the sequence opening windows

    V

    the type of the sequence closing windows opened by the bucketOpening Publisher's elements

    bucketOpening

    a Publisher to emit any item for a split signal and complete to terminate

    closeSelector

    a Function given an opening signal and returning a Publisher that emits to complete the window

    returns

    a windowing Flux delimiting its sub-sequences by a given Publisher and lasting until a selected Publisher emits

    Definition Classes
    Flux
  302. final def windowWhile(inclusionPredicate: (OUT) ⇒ Boolean, prefetch: Int): Flux[Flux[OUT]]

    Split this Flux sequence into multiple Flux windows that stay open while a given predicate matches the source elements.

    Split this Flux sequence into multiple Flux windows that stay open while a given predicate matches the source elements. Once the predicate returns false, the window closes with an onComplete and the triggering element is discarded.

    Note that for a sequence starting with a separator, or having several subsequent separators anywhere in the sequence, each occurrence will lead to an empty window.

    inclusionPredicate

    a predicate that triggers the next window when it becomes false.

    prefetch

    the request size to use for this Flux.

    returns

    a Flux of Flux windows, each containing subsequent elements that all passed a predicate.

    Definition Classes
    Flux
  303. final def windowWhile(inclusionPredicate: (OUT) ⇒ Boolean): Flux[Flux[OUT]]

    Split this Flux sequence into multiple Flux windows that stay open while a given predicate matches the source elements.

    Split this Flux sequence into multiple Flux windows that stay open while a given predicate matches the source elements. Once the predicate returns false, the window closes with an onComplete and the triggering element is discarded.

    Note that for a sequence starting with a separator, or having several subsequent separators anywhere in the sequence, each occurrence will lead to an empty window.

    inclusionPredicate

    a predicate that triggers the next window when it becomes false.

    returns

    a Flux of Flux windows, each containing subsequent elements that all passed a predicate.

    Definition Classes
    Flux
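As a hedged sketch on a plain List (not the reactive implementation), `windowWhile` treats elements failing the predicate as discarded separators; consecutive separators each produce an empty window, as the note above describes.

```scala
// Sketch only: false-predicate elements act as separators and are dropped.
def windowWhile[A](xs: List[A], keep: A => Boolean): List[List[A]] =
  xs.foldLeft(List(List.empty[A])) { (acc, a) =>
    if (keep(a)) acc.init :+ (acc.last :+ a)  // element stays in the open window
    else acc :+ Nil                           // separator: close window, drop element
  }
```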
  304. final def withLatestFrom[U, R](other: Publisher[_ <: U], resultSelector: Function2[OUT, U, _ <: R]): Flux[R]

    Combine values from this Flux with values from another Publisher through a BiFunction and emits the result.

    Combine values from this Flux with values from another Publisher through a BiFunction and emits the result.

    The operator will drop values from this Flux until the other Publisher produces any value.

    If the other Publisher completes without any value, the sequence is completed.

    U

    the other Publisher sequence type

    R

    the result type

    other

    the Publisher to combine with

    resultSelector

    the bi-function called with each pair of source and other elements that should return a single value to be emitted

    returns

    a combined Flux gated by another Publisher

    Definition Classes
    Flux
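As a hedged sketch of the pairing rule (not the reactive implementation), `withLatestFrom` can be modeled over a single time-ordered list of merged events, tagged `Left` for the main sequence and `Right` for the other: each main element is combined with the other side's most recent value, and dropped while the other side has produced nothing yet.

```scala
// Sketch only: models withLatestFrom's drop-until-latest pairing.
def withLatestFrom[A, B, R](events: List[Either[A, B]])(f: (A, B) => R): List[R] = {
  var latest: Option[B] = None
  events.flatMap {
    case Right(b) => latest = Some(b); Nil       // remember the other side's value
    case Left(a)  => latest.map(f(a, _)).toList  // drop a until some B has arrived
  }
}

val events: List[Either[Int, String]] =
  List(Left(1), Right("a"), Left(2), Left(3), Right("b"), Left(4))
val out = withLatestFrom(events)((i: Int, s: String) => s + i)
// out == List("a2", "a3", "b4") -- Left(1) is dropped, nothing seen from the other side yet
```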
  305. final def zipWith[T2](source2: Publisher[_ <: T2], prefetch: Int): Flux[(OUT, T2)]

    "Step-Merge" especially useful in Scatter-Gather scenarios.

    "Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

    T2

    type of the value from source2

    source2

    The second upstream Publisher to subscribe to.

    prefetch

    the request size to use for this Flux and the other Publisher

    returns

    a zipped Flux

    Definition Classes
    Flux
  306. final def zipWith[T2, V](source2: Publisher[_ <: T2], prefetch: Int, combinator: (OUT, T2) ⇒ V): Flux[V]

    "Step-Merge" especially useful in Scatter-Gather scenarios.

    "Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations produced by the passed combinator from the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

    T2

    type of the value from source2

    V

    The produced output after transformation by the combinator

    source2

    The second upstream Publisher to subscribe to.

    prefetch

    the request size to use for this Flux and the other Publisher

    combinator

    The aggregate function that will receive a unique value from each upstream and return the value to signal downstream

    returns

    a zipped Flux

    Definition Classes
    Flux
  307. final def zipWith[T2, V](source2: Publisher[_ <: T2], combinator: (OUT, T2) ⇒ V): Flux[V]

    "Step-Merge" especially useful in Scatter-Gather scenarios.

    "Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations produced by the passed combinator from the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

    T2

    type of the value from source2

    V

    The produced output after transformation by the combinator

    source2

    The second upstream Publisher to subscribe to.

    combinator

    The aggregate function that will receive a unique value from each upstream and return the value to signal downstream

    returns

    a zipped Flux

    Definition Classes
    Flux
  308. final def zipWith[T2](source2: Publisher[_ <: T2]): Flux[(OUT, T2)]

    "Step-Merge" especially useful in Scatter-Gather scenarios.

    "Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

    T2

    type of the value from source2

    source2

    The second upstream Publisher to subscribe to.

    returns

    a zipped Flux

    Definition Classes
    Flux
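As a rough analogy (not the reactive API itself), the pairing performed by `zipWith` and `zipWithIterable` matches `zip` on Scala collections: elements are paired index-by-index, and the result truncates at the shorter side, just as either source completing ends the zipped sequence.

```scala
// Sketch only: zip pairs element-wise and stops at the shorter input.
val left     = List(1, 2, 3)
val right    = List("a", "b")
val pairs    = left.zip(right)                              // tuple form, as zipWith(source2)
val combined = left.zip(right).map { case (n, s) => s * n } // combinator form
// pairs == List((1, "a"), (2, "b")); combined == List("a", "bb")
```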
  309. final def zipWithIterable[T2, V](iterable: Iterable[_ <: T2], zipper: Function2[OUT, T2, _ <: V]): Flux[V]

    Pairwise combines elements of this Flux and an Iterable sequence using the given zipper BiFunction.

    Pairwise combines elements of this Flux and an Iterable sequence using the given zipper BiFunction.

    T2

    the value type of the other iterable sequence

    V

    the result type

    iterable

    the Iterable to pair with

    zipper

    the BiFunction combinator

    returns

    a zipped Flux

    Definition Classes
    Flux
  310. final def zipWithIterable[T2](iterable: Iterable[_ <: T2]): Flux[(OUT, T2)]

    Pairwise combines as Tuple2 elements of this Flux and an Iterable sequence.

    Pairwise combines as Tuple2 elements of this Flux and an Iterable sequence.

    T2

    the value type of the other iterable sequence

    iterable

    the Iterable to pair with

    returns

    a zipped Flux

    Definition Classes
    Flux
  311. final def zipWithTimeSinceSubscribe(): Flux[(OUT, Long)]
    Definition Classes
    Flux

Inherited from Disposable

Inherited from Processor[IN, OUT]

Inherited from Subscriber[IN]

Inherited from Flux[OUT]

Inherited from Scannable

Inherited from Filter[OUT]

Inherited from FluxLike[OUT]

Inherited from OnErrorReturn[OUT]

Inherited from MapablePublisher[OUT]

Inherited from Publisher[OUT]

Inherited from AnyRef

Inherited from Any

Ungrouped