object Flux
Value Members
-
final
def
!=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
final
def
##(): Int
- Definition Classes
- AnyRef → Any
-
final
def
==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- def apply[T](jFlux: publisher.Flux[T]): Flux[T]
-
final
def
asInstanceOf[T0]: T0
- Definition Classes
- Any
-
def
clone(): AnyRef
- Attributes
- protected[java.lang]
- Definition Classes
- AnyRef
- Annotations
- @native() @throws( ... )
-
def
combineLatest[T, V](sources: Iterable[Publisher[T]], prefetch: Int, combinator: (Array[AnyRef]) ⇒ V): Flux[V]
Build a Flux whose data are generated by the combination of the most recent published values from all publishers.
- T
The common base type of the source sequences
- V
The produced output after transformation by the given combinator
- sources
The list of upstream Publisher to subscribe to.
- prefetch
demand produced to each combined source Publisher
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a Flux based on the produced value
-
def
combineLatest[T, V](sources: Iterable[Publisher[T]], combinator: (Array[AnyRef]) ⇒ V): Flux[V]
Build a Flux whose data are generated by the combination of the most recent published values from all publishers.
- T
The common base type of the source sequences
- V
The produced output after transformation by the given combinator
- sources
The list of upstream Publisher to subscribe to.
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a Flux based on the produced value
-
def
combineLatest[T1, T2, T3, T4, T5, T6, V](source1: Publisher[_ <: T1], source2: Publisher[_ <: T2], source3: Publisher[_ <: T3], source4: Publisher[_ <: T4], source5: Publisher[_ <: T5], source6: Publisher[_ <: T6], combinator: (Array[AnyRef]) ⇒ V): Flux[V]
Build a Flux whose data are generated by the combination of the most recent published values from all publishers.
- T1
type of the value from source1
- T2
type of the value from source2
- T3
type of the value from source3
- T4
type of the value from source4
- T5
type of the value from source5
- T6
type of the value from source6
- V
The produced output after transformation by the given combinator
- source1
The first upstream Publisher to subscribe to.
- source2
The second upstream Publisher to subscribe to.
- source3
The third upstream Publisher to subscribe to.
- source4
The fourth upstream Publisher to subscribe to.
- source5
The fifth upstream Publisher to subscribe to.
- source6
The sixth upstream Publisher to subscribe to.
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a Flux based on the produced value
-
def
combineLatest[T1, T2, T3, T4, T5, V](source1: Publisher[_ <: T1], source2: Publisher[_ <: T2], source3: Publisher[_ <: T3], source4: Publisher[_ <: T4], source5: Publisher[_ <: T5], combinator: (Array[AnyRef]) ⇒ V): Flux[V]
Build a Flux whose data are generated by the combination of the most recent published values from all publishers.
- T1
type of the value from source1
- T2
type of the value from source2
- T3
type of the value from source3
- T4
type of the value from source4
- T5
type of the value from source5
- V
The produced output after transformation by the given combinator
- source1
The first upstream Publisher to subscribe to.
- source2
The second upstream Publisher to subscribe to.
- source3
The third upstream Publisher to subscribe to.
- source4
The fourth upstream Publisher to subscribe to.
- source5
The fifth upstream Publisher to subscribe to.
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a Flux based on the produced value
-
def
combineLatest[T1, T2, T3, T4, V](source1: Publisher[_ <: T1], source2: Publisher[_ <: T2], source3: Publisher[_ <: T3], source4: Publisher[_ <: T4], combinator: (Array[AnyRef]) ⇒ V): Flux[V]
Build a Flux whose data are generated by the combination of the most recent published values from all publishers.
- T1
type of the value from source1
- T2
type of the value from source2
- T3
type of the value from source3
- T4
type of the value from source4
- V
The produced output after transformation by the given combinator
- source1
The first upstream Publisher to subscribe to.
- source2
The second upstream Publisher to subscribe to.
- source3
The third upstream Publisher to subscribe to.
- source4
The fourth upstream Publisher to subscribe to.
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a Flux based on the produced value
-
def
combineLatest[T1, T2, T3, V](source1: Publisher[_ <: T1], source2: Publisher[_ <: T2], source3: Publisher[_ <: T3], combinator: (Array[AnyRef]) ⇒ V): Flux[V]
Build a Flux whose data are generated by the combination of the most recent published values from all publishers.
- T1
type of the value from source1
- T2
type of the value from source2
- T3
type of the value from source3
- V
The produced output after transformation by the given combinator
- source1
The first upstream Publisher to subscribe to.
- source2
The second upstream Publisher to subscribe to.
- source3
The third upstream Publisher to subscribe to.
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a Flux based on the produced value
-
def
combineLatest[T1, T2, V](source1: Publisher[_ <: T1], source2: Publisher[_ <: T2], combinator: (T1, T2) ⇒ V): Flux[V]
Build a Flux whose data are generated by the combination of the most recent published values from all publishers.
- T1
type of the value from source1
- T2
type of the value from source2
- V
The produced output after transformation by the given combinator
- source1
The first upstream Publisher to subscribe to.
- source2
The second upstream Publisher to subscribe to.
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a Flux based on the produced value
-
def
combineLatest[T, V](combinator: (Array[AnyRef]) ⇒ V, prefetch: Int, sources: Publisher[_ <: T]*): Flux[V]
Build a Flux whose data are generated by the combination of the most recent published values from all publishers.
- T
type of the value from sources
- V
The produced output after transformation by the given combinator
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- prefetch
demand produced to each combined source Publisher
- sources
The upstream Publishers to subscribe to.
- returns
a Flux based on the produced combinations
-
def
combineLatest[T, V](combinator: (Array[AnyRef]) ⇒ V, sources: Publisher[_ <: T]*): Flux[V]
Build a Flux whose data are generated by the combination of the most recent published values from all publishers.
- T
type of the value from sources
- V
The produced output after transformation by the given combinator
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- sources
The upstream Publishers to subscribe to.
- returns
a Flux based on the produced combinations
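As a sketch of the two-source overload above (the sensor names are illustrative; this assumes reactor-scala-extensions is on the classpath and that its Scala wrapper lives at `reactor.core.scala.publisher.Flux`):

```scala
import reactor.core.scala.publisher.Flux

// Combine the most recent temperature and humidity readings into one record.
// Each time either source emits, the combinator re-runs with the latest pair.
val temperature: Flux[Double] = Flux.just(21.0, 21.5)
val humidity: Flux[Double]    = Flux.just(0.40, 0.45)

val readings: Flux[String] =
  Flux.combineLatest[Double, Double, String](
    temperature,
    humidity,
    (t, h) => s"${t}degC at ${h * 100}% humidity"
  )

readings.subscribe(r => println(r))
```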
-
def
concat[T](sources: Publisher[T]*): Flux[T]
Concat all sources pulled from the given Publisher array. A complete signal from each source will delimit the individual sequences and will be eventually passed to the returned Publisher.

- T
The source type of the data sequence
- sources
The array of Publisher to concat
- returns
a new Flux concatenating all source sequences
-
def
concat[T](sources: Publisher[Publisher[T]], prefetch: Int): Flux[T]
Concat all sources emitted as an onNext signal from a parent Publisher. A complete signal from each source will delimit the individual sequences and will be eventually passed to the returned Publisher which will stop listening if the main sequence has also completed.

- T
The source type of the data sequence
- sources
The Publisher of Publisher to concat
- prefetch
the inner source request size
- returns
a new Flux concatenating all inner sources sequences until complete or error
-
def
concat[T](sources: Publisher[Publisher[T]]): Flux[T]
Concat all sources emitted as an onNext signal from a parent Publisher. A complete signal from each source will delimit the individual sequences and will be eventually passed to the returned Publisher which will stop listening if the main sequence has also completed.

- T
The source type of the data sequence
- sources
The Publisher of Publisher to concat
- returns
a new Flux concatenating all inner sources sequences until complete or error
-
def
concat[T](sources: Iterable[Publisher[T]]): Flux[T]
Concat all sources pulled from the supplied Iterator on Publisher.subscribe from the passed Iterable until Iterator.hasNext returns false. A complete signal from each source will delimit the individual sequences and will be eventually passed to the returned Publisher.
- T
The source type of the data sequence
- sources
The Iterable of Publisher to concat
- returns
a new Flux concatenating all source sequences
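A minimal sketch of the vararg `concat` overload (assuming the library is on the classpath): concat preserves order, draining each source completely before subscribing to the next.

```scala
import reactor.core.scala.publisher.Flux

// concat subscribes to `first` until it completes, then to `second`:
// the resulting sequence is 1, 2, 3, 10, 20 - never interleaved.
val first  = Flux.just(1, 2, 3)
val second = Flux.just(10, 20)

Flux.concat(first, second).subscribe(i => println(i))
```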
-
def
concatDelayError[T](sources: Publisher[T]*): Flux[T]
Concat all sources pulled from the given Publisher array. A complete signal from each source will delimit the individual sequences and will be eventually passed to the returned Publisher. Any error will be delayed until all sources have been concatenated.

- T
The source type of the data sequence
- sources
The array of Publisher to concat
- returns
a new Flux concatenating all source sequences
-
def
concatDelayError[T](sources: Publisher[Publisher[T]], delayUntilEnd: Boolean, prefetch: Int): Flux[T]
Concat all sources emitted as an onNext signal from a parent Publisher. A complete signal from each source will delimit the individual sequences and will be eventually passed to the returned Publisher which will stop listening if the main sequence has also completed.
Errors will be delayed after the current concat backlog if delayUntilEnd is false or after all sources if delayUntilEnd is true.

- T
The source type of the data sequence
- sources
The Publisher of Publisher to concat
- delayUntilEnd
delay error until all sources have been consumed instead of after the current source
- prefetch
the inner source request size
- returns
a new Flux concatenating all inner sources sequences until complete or error
-
def
concatDelayError[T](sources: Publisher[Publisher[T]], prefetch: Int): Flux[T]
Concat all sources emitted as an onNext signal from a parent Publisher. A complete signal from each source will delimit the individual sequences and will be eventually passed to the returned Publisher which will stop listening if the main sequence has also completed.

- T
The source type of the data sequence
- sources
The Publisher of Publisher to concat
- prefetch
the inner source request size
- returns
a new Flux concatenating all inner sources sequences until complete or error
-
def
concatDelayError[T](sources: Publisher[Publisher[T]]): Flux[T]
Concat all sources emitted as an onNext signal from a parent Publisher. A complete signal from each source will delimit the individual sequences and will be eventually passed to the returned Publisher which will stop listening if the main sequence has also completed.

- T
The source type of the data sequence
- sources
The Publisher of Publisher to concat
- returns
a new Flux concatenating all inner sources sequences until complete or error
-
def
create[T](emitter: (FluxSink[T]) ⇒ Unit, backpressure: OverflowStrategy): Flux[T]
Creates a Flux with multi-emission capabilities (synchronous or asynchronous) through the FluxSink API.
This Flux factory is useful if one wants to adapt some other multi-valued async API and not worry about cancellation and backpressure. For example:
Flux.create[String](emitter => {
  val al: ActionListener = e => emitter.next(textField.getText())
  // without cancellation support:
  button.addActionListener(al)
  // with cancellation support:
  button.addActionListener(al)
  emitter.setCancellation(() => button.removeListener(al))
}, FluxSink.OverflowStrategy.LATEST)
- T
the value type
- emitter
the consumer that will receive a FluxSink for each individual Subscriber.
- backpressure
the backpressure mode, see OverflowStrategy for the available backpressure modes
- returns
a Flux
-
def
create[T](emitter: (FluxSink[T]) ⇒ Unit): Flux[T]
Creates a Flux with multi-emission capabilities (synchronous or asynchronous) through the FluxSink API.
This Flux factory is useful if one wants to adapt some other multi-valued async API and not worry about cancellation and backpressure. For example:
Handles backpressure by buffering all signals if the downstream can't keep up.
Flux.create[String](emitter => {
  val al: ActionListener = e => emitter.next(textField.getText())
  // without cancellation support:
  button.addActionListener(al)
  // with cancellation support:
  button.addActionListener(al)
  emitter.setCancellation(() => button.removeListener(al))
})
- T
the value type
- emitter
the consumer that will receive a FluxSink for each individual Subscriber.
- returns
a Flux
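A hedged sketch of bridging a callback API with `create` (the `MessageBus` trait and its methods are hypothetical stand-ins for whatever listener-based API is being adapted; assumes the library on the classpath):

```scala
import reactor.core.scala.publisher.Flux

// Hypothetical listener-based API we want to adapt.
trait MessageBus {
  def onMessage(handler: String => Unit): Unit
  def removeHandler(handler: String => Unit): Unit
}

def messages(bus: MessageBus): Flux[String] =
  Flux.create[String] { emitter =>
    val handler: String => Unit = msg => emitter.next(msg)
    bus.onMessage(handler)
    // Unregister the handler when the subscriber cancels.
    emitter.setCancellation(() => bus.removeHandler(handler))
  }
```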
-
def
defer[T](supplier: () ⇒ Publisher[T]): Flux[T]
Supply a Publisher every time subscribe is called on the returned Flux.
Supply a Publisher every time subscribe is called on the returned Flux. The passed scala.Function0[Publisher[T]] will be invoked, and it is up to the developer to return a new instance of a Publisher or reuse one, effectively behaving like Flux.from
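The laziness of defer can be sketched as follows: the supplier runs once per subscriber, so each subscription can observe fresh state (assumes the library on the classpath).

```scala
import reactor.core.scala.publisher.Flux

// Without defer, the timestamp would be captured once, at assembly time.
// With defer, each subscriber gets a Flux built at subscription time.
val timestamped: Flux[String] =
  Flux.defer(() => Flux.just(s"subscribed at ${System.currentTimeMillis()}ms"))

timestamped.subscribe(s => println(s)) // first subscription
timestamped.subscribe(s => println(s)) // later subscription sees a newer time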
-
def
empty[T]: Flux[T]
Create a Flux that completes without emitting any item.
-
final
def
eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def
equals(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
-
def
error[O](throwable: Throwable, whenRequested: Boolean): Flux[O]
Build a Flux that will only emit an error signal to any new subscriber.
-
def
error[T](error: Throwable): Flux[T]
Create a Flux that completes with the specified error.
-
def
finalize(): Unit
- Attributes
- protected[java.lang]
- Definition Classes
- AnyRef
- Annotations
- @throws( classOf[java.lang.Throwable] )
-
def
first[I](sources: Iterable[Publisher[_ <: I]]): Flux[I]
Pick the first Publisher to emit any signal (onNext/onError/onComplete) and replay all signals from that Publisher, effectively behaving like the fastest of these competing sources.

- I
The type of values in both source and output sequences
- sources
The competing source publishers
- returns
a new Flux behaving like the fastest of its sources
-
def
first[I](sources: Publisher[_ <: I]*): Flux[I]
Pick the first Publisher to emit any signal (onNext/onError/onComplete) and replay all signals from that Publisher, effectively behaving like the fastest of these competing sources.

- I
The type of values in both source and output sequences
- sources
The competing source publishers
- returns
a new Flux behaving like the fastest of its sources
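A small sketch of `first` racing two sources (assumes the library on the classpath; an interval delay makes one source slower):

```scala
import scala.concurrent.duration._
import reactor.core.scala.publisher.Flux

// Race an immediate source against a delayed one; `first` replays the
// winner (here `fast`) and cancels the other.
val fast: Flux[Long] = Flux.just(0L)
val slow: Flux[Long] = Flux.interval(1.second)

Flux.first(fast, slow).subscribe(n => println(n))
```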
-
def
from[T](source: Publisher[_ <: T]): Flux[T]
Expose the specified Publisher with the Flux API.
-
def
fromArray[T <: AnyRef](array: Array[T]): Flux[T]
Create a Flux that emits the items contained in the provided scala.Array.
-
def
fromIterable[T](it: Iterable[T]): Flux[T]
Create a Flux that emits the items contained in the provided Iterable.
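The Iterable factory can be sketched in a couple of lines (assumes the library on the classpath):

```scala
import reactor.core.scala.publisher.Flux

// Each subscriber iterates the Seq from the start.
val letters: Flux[String] = Flux.fromIterable(Seq("a", "b", "c"))
letters.subscribe(l => println(l))
```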
-
def
fromStream[T](streamSupplier: () ⇒ Stream[T]): Flux[T]
Create a Flux that emits the items contained in a Stream created by the provided Function0 for each subscription. The Stream is closed automatically by the operator on cancellation, error or completion.

- T
The type of values in the source Stream and resulting Flux
- streamSupplier
the Function0 that generates the Stream from which to read data
- returns
a new Flux
-
def
fromStream[T](s: Stream[T]): Flux[T]
Create a Flux that emits the items contained in the provided Stream. Keep in mind that a Stream cannot be re-used, which can be problematic in case of multiple subscriptions or re-subscription (like with Flux.repeat() or Flux.retry). The Stream is closed automatically by the operator on cancellation, error or completion.

- T
The type of values in the source Stream and resulting Flux
- s
the Stream to read data from
- returns
a new Flux
-
def
generate[T, S](stateSupplier: Option[Callable[S]], generator: (S, SynchronousSink[T]) ⇒ S, stateConsumer: (Option[S]) ⇒ Unit): Flux[T]
Generate signals one-by-one via a function callback.

- T
the value type emitted
- S
the custom state per subscriber
- stateSupplier
called for each incoming Subscriber to provide the initial state for the generator bifunction
- generator
the bifunction called with the current state and the SynchronousSink API instance; it is expected to return a (new) state.
- stateConsumer
called after the generator has terminated or the downstream cancelled, receiving the last state to be handled (i.e., release resources or do other cleanup).
- returns
a Reactive Flux publisher ready to be subscribed
-
def
generate[T, S](stateSupplier: Option[Callable[S]], generator: (S, SynchronousSink[T]) ⇒ S): Flux[T]
Generate signals one-by-one via a function callback.

- T
the value type emitted
- S
the custom state per subscriber
- stateSupplier
called for each incoming Subscriber to provide the initial state for the generator bifunction
- generator
the bifunction called with the current state and the SynchronousSink API instance; it is expected to return a (new) state.
- returns
a Reactive Flux publisher ready to be subscribed
-
def
generate[T](generator: (SynchronousSink[T]) ⇒ Unit): Flux[T]
Generate signals one-by-one via a consumer callback.

- T
the value type emitted
- generator
the consumer called with the SynchronousSink API instance
- returns
a Reactive Flux publisher ready to be subscribed
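The stateful `generate` variant above can be sketched as a bounded Fibonacci source (a common illustration, not from the original doc; assumes the library on the classpath). The generator returns the next state, and calling `complete()` on the sink ends the sequence:

```scala
import java.util.concurrent.Callable
import reactor.core.scala.publisher.Flux

// State is the current (a, b) pair; emit `a`, then advance to (b, a + b).
val fib: Flux[Long] = Flux.generate[Long, (Long, Long)](
  Some(new Callable[(Long, Long)] { def call(): (Long, Long) = (0L, 1L) }),
  (state, sink) => {
    val (a, b) = state
    sink.next(a)
    if (a > 100L) sink.complete()
    (b, a + b)
  }
)

fib.subscribe(n => println(n))
```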
-
final
def
getClass(): Class[_]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
hashCode(): Int
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
-
def
interval(delay: Duration, period: Duration, timer: Scheduler): Flux[Long]
Create a new Flux that emits an ever-incrementing Long starting with 0, every period, on the given timer, after an initial delay. If demand is not produced in time, an onError will be signalled. The Flux will never complete.
- delay
the delay to wait before emitting 0L
- period
the period before each following increment
- timer
the Scheduler to schedule on
- returns
a new timed Flux
-
def
interval(period: Duration, timer: Scheduler): Flux[Long]
Create a new Flux that emits an ever incrementing long starting with 0 every N milliseconds on the given timer. If demand is not produced in time, an onError will be signalled. The Flux will never complete.

- period
The period to wait before each increment
- timer
a Scheduler instance
- returns
a new timed Flux
-
def
interval(delay: Duration, period: Duration): Flux[Long]
Create a new Flux that emits an ever-incrementing Long starting with 0, every period, on a global timer, after an initial delay. If demand is not produced in time, an onError will be signalled. The Flux will never complete.
- delay
the delay to wait before emitting 0L
- period
the period before each following increment
- returns
a new timed Flux
-
def
interval(period: Duration): Flux[Long]
Create a new Flux that emits an ever incrementing long starting with 0 every period on the global timer.
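A sketch of the delay + period overload (assuming scala.concurrent.duration for the Duration type and the library on the classpath):

```scala
import scala.concurrent.duration._
import reactor.core.scala.publisher.Flux

// After an initial 1-second delay, emit 0, 1, 2, ... every 500ms, forever.
val ticks: Flux[Long] = Flux.interval(1.second, 500.millis)
ticks.subscribe(t => println(s"tick $t"))
```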
-
final
def
isInstanceOf[T0]: Boolean
- Definition Classes
- Any
-
def
just[T](firstData: T, data: T*): Flux[T]
Create a new Flux that emits the specified items and then complete.
-
def
merge[I](prefetch: Int, sources: Publisher[_ <: I]*): Flux[I]
Merge data from Publisher sequences contained in an array / vararg into an interleaved merged sequence. Unlike concat, sources are subscribed to eagerly.

Note that merge is tailored to work with asynchronous sources or finite sources. When dealing with an infinite source that doesn't already publish on a dedicated Scheduler, you must isolate that source in its own Scheduler, as merge would otherwise attempt to drain it before subscribing to another source.
- I
The source type of the data sequence
- prefetch
the inner source request size
- sources
the array of Publisher sources to merge
- returns
a fresh Reactive Flux publisher ready to be subscribed
-
def
merge[I](sources: Publisher[_ <: I]*): Flux[I]
Merge data from Publisher sequences contained in an array / vararg into an interleaved merged sequence. Unlike concat, sources are subscribed to eagerly.

Note that merge is tailored to work with asynchronous sources or finite sources. When dealing with an infinite source that doesn't already publish on a dedicated Scheduler, you must isolate that source in its own Scheduler, as merge would otherwise attempt to drain it before subscribing to another source.
- I
The source type of the data sequence
- sources
the Publisher sources to merge
- returns
a merged Flux
-
def
merge[I](sources: Iterable[Publisher[_ <: I]]): Flux[I]
Merge data from Publisher sequences contained in an Iterable into an interleaved merged sequence. Unlike concat, inner sources are subscribed to eagerly. A new Iterator will be created for each subscriber.

Note that merge is tailored to work with asynchronous sources or finite sources. When dealing with an infinite source that doesn't already publish on a dedicated Scheduler, you must isolate that source in its own Scheduler, as merge would otherwise attempt to drain it before subscribing to another source.
- I
The source type of the data sequence
- sources
the Iterable of sources to merge (will be lazily iterated on subscribe)
- returns
a merged Flux
-
def
merge[T](source: Publisher[Publisher[_ <: T]], concurrency: Int, prefetch: Int): Flux[T]
Merge data from Publisher sequences emitted by the passed Publisher into an interleaved merged sequence. Unlike concat, inner sources are subscribed to eagerly (but at most concurrency sources are subscribed to at the same time).
Note that merge is tailored to work with asynchronous sources or finite sources. When dealing with an infinite source that doesn't already publish on a dedicated Scheduler, you must isolate that source in its own Scheduler, as merge would otherwise attempt to drain it before subscribing to another source.
- T
the merged type
- source
a Publisher of Publisher sources to merge
- concurrency
the request produced to the main source thus limiting concurrent merge backlog
- prefetch
the inner source request size
- returns
a merged Flux
-
def
merge[T](source: Publisher[Publisher[_ <: T]], concurrency: Int): Flux[T]
Merge data from Publisher sequences emitted by the passed Publisher into an interleaved merged sequence. Unlike concat, inner sources are subscribed to eagerly (but at most concurrency sources are subscribed to at the same time).
Note that merge is tailored to work with asynchronous sources or finite sources. When dealing with an infinite source that doesn't already publish on a dedicated Scheduler, you must isolate that source in its own Scheduler, as merge would otherwise attempt to drain it before subscribing to another source.
- T
the merged type
- source
a Publisher of Publisher sources to merge
- concurrency
the request produced to the main source thus limiting concurrent merge backlog
- returns
a merged Flux
-
def
merge[T](source: Publisher[Publisher[_ <: T]]): Flux[T]
Merge data from Publisher sequences emitted by the passed Publisher into an interleaved merged sequence. Unlike concat, inner sources are subscribed to eagerly.

Note that merge is tailored to work with asynchronous sources or finite sources. When dealing with an infinite source that doesn't already publish on a dedicated Scheduler, you must isolate that source in its own Scheduler, as merge would otherwise attempt to drain it before subscribing to another source.
- T
the merged type
- source
a Publisher of Publisher sources to merge
- returns
a merged Flux
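The contrast with concat can be sketched as follows (illustrative; with synchronous in-memory sources like these the interleaving is not observable, the point is the eager subscription):

```scala
import reactor.core.scala.publisher.Flux

// merge subscribes to both sources eagerly; values interleave as they
// arrive, unlike concat, which drains `a` before subscribing to `b`.
val a = Flux.just(1, 2, 3)
val b = Flux.just(10, 20)

Flux.merge(a, b).subscribe(i => println(i))
```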
-
def
mergeDelayError[I](prefetch: Int, sources: Publisher[_ <: I]*): Flux[I]
Merge data from Publisher sequences contained in an array / vararg into an interleaved merged sequence. Unlike concat, sources are subscribed to eagerly. This variant will delay any error until after the rest of the merge backlog has been processed.

Note that merge is tailored to work with asynchronous sources or finite sources. When dealing with an infinite source that doesn't already publish on a dedicated Scheduler, you must isolate that source in its own Scheduler, as merge would otherwise attempt to drain it before subscribing to another source.
- I
The source type of the data sequence
- prefetch
the inner source request size
- sources
the array of Publisher sources to merge
- returns
a fresh Reactive Flux publisher ready to be subscribed
-
def
mergeSequential[I](sources: Iterable[Publisher[_ <: I]], maxConcurrency: Int, prefetch: Int): Flux[I]
Merge data from Publisher sequences provided in an Iterable into an ordered merged sequence. Unlike concat, sources are subscribed to eagerly (but at most maxConcurrency sources at a time). Unlike merge, their emitted values are merged into the final sequence in subscription order.
- I
the merged type
- sources
an Iterable of Publisher sequences to merge
- maxConcurrency
the request produced to the main source thus limiting concurrent merge backlog
- prefetch
the inner source request size
- returns
a merged Flux, subscribing early but keeping the original ordering
-
def
mergeSequential[I](sources: Iterable[Publisher[_ <: I]]): Flux[I]
Merge data from Publisher sequences provided in an Iterable into an ordered merged sequence. Unlike concat, sources are subscribed to eagerly. Unlike merge, their emitted values are merged into the final sequence in subscription order.

- I
the merged type
- sources
an Iterable of Publisher sequences to merge
- returns
a merged Flux, subscribing early but keeping the original ordering
-
def
mergeSequential[I](prefetch: Int, sources: Publisher[_ <: I]*): Flux[I]
Merge data from Publisher sequences provided in an array/vararg into an ordered merged sequence. Unlike concat, sources are subscribed to eagerly. Unlike merge, their emitted values are merged into the final sequence in subscription order.

- I
the merged type
- prefetch
the inner source request size
- sources
a number of Publisher sequences to merge
- returns
a merged Flux, subscribing early but keeping the original ordering
-
def
mergeSequential[I](sources: Publisher[_ <: I]*): Flux[_ <: I]
Merge data from Publisher sequences provided in an array/vararg into an ordered merged sequence. Unlike concat, sources are subscribed to eagerly. Unlike merge, their emitted values are merged into the final sequence in subscription order.

- I
the merged type
- sources
a number of Publisher sequences to merge
- returns
a merged Flux, subscribing early but keeping the original ordering
-
def
mergeSequential[T](sources: Publisher[_ <: Publisher[_ <: T]], maxConcurrency: Int, prefetch: Int): Flux[T]
Merge data from Publisher sequences emitted by the passed Publisher into an ordered merged sequence. Unlike concat, the inner publishers are subscribed to eagerly (but at most maxConcurrency sources at a time). Unlike merge, their emitted values are merged into the final sequence in subscription order.
- T
the merged type
- sources
a Publisher of Publisher sources to merge
- maxConcurrency
the request produced to the main source thus limiting concurrent merge backlog
- prefetch
the inner source request size
- returns
a merged Flux, subscribing early but keeping the original ordering
-
def
mergeSequential[T](sources: Publisher[Publisher[T]]): Flux[T]
Merge data from Publisher sequences emitted by the passed Publisher into an ordered merged sequence.
Merge data from Publisher sequences emitted by the passed Publisher into an ordered merged sequence. Unlike concat, the inner publishers are subscribed to eagerly. Unlike merge, their emitted values are merged into the final sequence in subscription order.

- T
the merged type
- sources
a Publisher of Publisher sources to merge
- returns
a merged Flux, subscribing early but keeping the original ordering
-
def
mergeSequentialDelayError[I](sources: Iterable[Publisher[_ <: I]], maxConcurrency: Int, prefetch: Int): Flux[I]
Merge data from Publisher sequences provided in an Iterable into an ordered merged sequence.
Merge data from Publisher sequences provided in an Iterable into an ordered merged sequence. Unlike concat, sources are subscribed to eagerly (but at most maxConcurrency sources at a time). Unlike merge, their emitted values are merged into the final sequence in subscription order. This variant will delay any error until after the rest of the mergeSequential backlog has been processed.
- I
the merged type
- sources
an Iterable of Publisher sequences to merge
- maxConcurrency
the request produced to the main source thus limiting concurrent merge backlog
- prefetch
the inner source request size
- returns
a merged Flux, subscribing early but keeping the original ordering
-
def
mergeSequentialDelayError[I](prefetch: Int, sources: Publisher[_ <: I]*): Flux[_ <: I]
Merge data from Publisher sequences provided in an array/vararg into an ordered merged sequence.
Merge data from Publisher sequences provided in an array/vararg into an ordered merged sequence. Unlike concat, sources are subscribed to eagerly. Unlike merge, their emitted values are merged into the final sequence in subscription order. This variant will delay any error until after the rest of the mergeSequential backlog has been processed.

- I
the merged type
- prefetch
the inner source request size
- sources
a number of Publisher sequences to merge
- returns
a merged Flux, subscribing early but keeping the original ordering
-
def
mergeSequentialDelayError[T](sources: Publisher[_ <: Publisher[_ <: T]], maxConcurrency: Int, prefetch: Int): Flux[T]
Merge data from Publisher sequences emitted by the passed Publisher into an ordered merged sequence.
Merge data from Publisher sequences emitted by the passed Publisher into an ordered merged sequence. Unlike concat, the inner publishers are subscribed to eagerly (but at most maxConcurrency sources at a time). Unlike merge, their emitted values are merged into the final sequence in subscription order. This variant will delay any error until after the rest of the mergeSequential backlog has been processed.
- T
the merged type
- sources
a Publisher of Publisher sources to merge
- maxConcurrency
the request produced to the main source thus limiting concurrent merge backlog
- prefetch
the inner source request size
- returns
a merged Flux, subscribing early but keeping the original ordering
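To illustrate the delay-error behaviour shared by these variants (a sketch assuming reactor-scala-extensions; the failing source and error message are illustrative):

```scala
import reactor.core.scala.publisher.Flux

// The failure of the first source is held back until the second
// source's values have been emitted, then forwarded downstream.
val failing = Flux.error[Int](new RuntimeException("boom"))
val healthy = Flux.just(2, 3)
Flux.mergeSequentialDelayError(Iterable(failing, healthy), 2, 8)
  .subscribe(
    i => println(i),
    (e: Throwable) => println("error: " + e.getMessage))
```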
-
final
def
ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
-
def
never[T](): Flux[T]
Create a Flux that will never signal any data, error or completion signal.
-
final
def
notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
final
def
notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
-
def
push[T](emitter: (FluxSink[T]) ⇒ Unit, backpressure: OverflowStrategy): Flux[T]
Creates a Flux with multi-emission capabilities from a single threaded producer through the FluxSink API.
Creates a Flux with multi-emission capabilities from a single threaded producer through the FluxSink API.
This Flux factory is useful if one wants to adapt some other single-threaded multi-valued async API and not worry about cancellation and backpressure. For example:
    Flux.push[String](emitter => {
      val al: ActionListener = e => emitter.next(textField.getText())
      // without cleanup support:
      button.addActionListener(al)
      // with cleanup support:
      button.addActionListener(al)
      emitter.onDispose(() => button.removeListener(al))
    }, FluxSink.OverflowStrategy.LATEST)
- T
the value type
- emitter
the consumer that will receive a FluxSink for each individual Subscriber.
- backpressure
the backpressure mode, see OverflowStrategy for the available backpressure modes
- returns
a Flux
-
def
push[T](emitter: (FluxSink[T]) ⇒ Unit): Flux[T]
Creates a Flux with multi-emission capabilities from a single threaded producer through the FluxSink API.
Creates a Flux with multi-emission capabilities from a single threaded producer through the FluxSink API.
This Flux factory is useful if one wants to adapt some other single-threaded multi-valued async API and not worry about cancellation and backpressure. For example:
    Flux.push[String](emitter => {
      val al: ActionListener = e => emitter.next(textField.getText())
      // without cleanup support:
      button.addActionListener(al)
      // with cleanup support:
      button.addActionListener(al)
      emitter.onDispose(() => button.removeListener(al))
    })
- T
the value type
- emitter
the consumer that will receive a FluxSink for each individual Subscriber.
- returns
a Flux
-
def
range(start: Int, count: Int): Flux[Integer]
Build a Flux that will only emit a sequence of incrementing integers from
start to start + count, then complete.
-
def
switchOnNext[T](mergedPublishers: Publisher[Publisher[_ <: T]], prefetch: Int): Flux[T]
Build a reactor.core.publisher.FluxProcessor whose data are emitted by the most recent emitted Publisher.
Build a reactor.core.publisher.FluxProcessor whose data are emitted by the most recent emitted Publisher. The Flux will complete once both the source of publishers and the last switched-to Publisher have completed.
- T
the produced type
- mergedPublishers
The Publisher of switching Publisher to subscribe to.
- prefetch
the inner source request size
- returns
a reactor.core.publisher.FluxProcessor accepting publishers and producing T
-
def
switchOnNext[T](mergedPublishers: Publisher[Publisher[_ <: T]]): Flux[T]
Build a reactor.core.publisher.FluxProcessor whose data are emitted by the most recent emitted Publisher.
Build a reactor.core.publisher.FluxProcessor whose data are emitted by the most recent emitted Publisher. The Flux will complete once both the source of publishers and the last switched-to Publisher have completed.
- T
the produced type
- mergedPublishers
The Publisher of switching Publisher to subscribe to.
- returns
a reactor.core.publisher.FluxProcessor accepting publishers and producing T
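A brief sketch of the switching behaviour (assuming reactor-scala-extensions; the inner sources are illustrative):

```scala
import reactor.core.scala.publisher.Flux

// When outer emits a new inner Publisher, emission switches to it
// and the previous inner source is cancelled.
val outer = Flux.just(Flux.just(1, 2, 3), Flux.just(10, 20))
Flux.switchOnNext(outer).subscribe(i => println(i))
```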
-
final
def
synchronized[T0](arg0: ⇒ T0): T0
- Definition Classes
- AnyRef
-
def
toString(): String
- Definition Classes
- AnyRef → Any
-
def
using[T, D](resourceSupplier: () ⇒ D, sourceSupplier: (D) ⇒ Publisher[_ <: T], resourceCleanup: (D) ⇒ Unit, eager: Boolean): Flux[T]
Uses a resource, generated by a supplier for each individual Subscriber, while streaming the values from a Publisher derived from the same resource and makes sure the resource is released if the sequence terminates or the Subscriber cancels.
Uses a resource, generated by a supplier for each individual Subscriber, while streaming the values from a Publisher derived from the same resource and makes sure the resource is released if the sequence terminates or the Subscriber cancels.
- Eager resource cleanup happens just before the source termination, and exceptions raised by the cleanup Consumer may override the terminal event.
- Non-eager cleanup will drop any exception.
- T
emitted type
- D
resource type
- resourceSupplier
a java.util.concurrent.Callable that is called on subscribe
- sourceSupplier
a Publisher factory derived from the supplied resource
- resourceCleanup
invoked on completion
- eager
true to clean before terminating downstream subscribers
- returns
a new Flux
-
def
using[T, D](resourceSupplier: () ⇒ D, sourceSupplier: (D) ⇒ Publisher[_ <: T], resourceCleanup: (D) ⇒ Unit): Flux[T]
Uses a resource, generated by a supplier for each individual Subscriber, while streaming the values from a Publisher derived from the same resource and makes sure the resource is released if the sequence terminates or the Subscriber cancels.
Uses a resource, generated by a supplier for each individual Subscriber, while streaming the values from a Publisher derived from the same resource and makes sure the resource is released if the sequence terminates or the Subscriber cancels.
Eager resource cleanup happens just before the source termination, and exceptions raised by the cleanup Consumer may override the terminal event.
- T
emitted type
- D
resource type
- resourceSupplier
a java.util.concurrent.Callable that is called on subscribe
- sourceSupplier
a Publisher factory derived from the supplied resource
- resourceCleanup
invoked on completion
- returns
new Flux
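A minimal sketch of resource-scoped streaming (assuming reactor-scala-extensions; "data.txt" is a hypothetical file):

```scala
import java.io.{BufferedReader, FileReader}
import reactor.core.scala.publisher.Flux

// One reader per Subscriber; it is closed when the sequence
// terminates or the Subscriber cancels.
val lines = Flux.using[String, BufferedReader](
  () => new BufferedReader(new FileReader("data.txt")),
  reader => Flux.just(reader.readLine()),
  reader => reader.close())
```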
-
final
def
wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws( ... )
-
final
def
wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @native() @throws( ... )
-
def
zip[I, O](combinator: (Array[AnyRef]) ⇒ O, prefetch: Int, sources: Publisher[_ <: I]*): Flux[O]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations produced by the passed combinator function of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

- I
the type of the input sources
- O
the combined produced type
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- prefetch
individual source request size
- sources
the Publisher array to iterate on Publisher.subscribe
- returns
a zipped Flux
-
def
zip[I, O](combinator: (Array[AnyRef]) ⇒ O, sources: Publisher[_ <: I]*): Flux[O]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations produced by the passed combinator function of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

- I
the type of the input sources
- O
the combined produced type
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- sources
the Publisher array to iterate on Publisher.subscribe
- returns
a zipped Flux
-
def
zip[O](sources: Iterable[_ <: Publisher[_]], prefetch: Int, combinator: (Array[_]) ⇒ O): Flux[O]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations produced by the passed combinator function of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.
The Iterable.iterator will be called on each Publisher.subscribe.
- O
the combined produced type
- sources
the Iterable to iterate on Publisher.subscribe
- prefetch
the inner source request size
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a zipped Flux
-
def
zip[O](sources: Iterable[_ <: Publisher[_]], combinator: (Array[_]) ⇒ O): Flux[O]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations produced by the passed combinator function of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.
The Iterable.iterator will be called on each Publisher.subscribe.
- O
the combined produced type
- sources
the Iterable to iterate on Publisher.subscribe
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a zipped Flux
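A sketch of the Iterable variant with a combinator (assuming reactor-scala-extensions; sources are illustrative):

```scala
import reactor.core.scala.publisher.Flux

// Combines the nth element of every source; completes as soon as
// the shortest source completes.
val sources = Iterable(Flux.just(1, 2, 3), Flux.just(10, 20, 30))
Flux.zip[String](sources, (values: Array[_]) => values.mkString("-"))
  .subscribe(s => println(s)) // 1-10, 2-20, 3-30
```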
-
def
zip[T1, T2, T3, T4, T5, T6](source1: Publisher[_ <: T1], source2: Publisher[_ <: T2], source3: Publisher[_ <: T3], source4: Publisher[_ <: T4], source5: Publisher[_ <: T5], source6: Publisher[_ <: T6]): Flux[(T1, T2, T3, T4, T5, T6)]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

- T1
type of the value from source1
- T2
type of the value from source2
- T3
type of the value from source3
- T4
type of the value from source4
- T5
type of the value from source5
- T6
type of the value from source6
- source1
The first upstream Publisher to subscribe to.
- source2
The second upstream Publisher to subscribe to.
- source3
The third upstream Publisher to subscribe to.
- source4
The fourth upstream Publisher to subscribe to.
- source5
The fifth upstream Publisher to subscribe to.
- source6
The sixth upstream Publisher to subscribe to.
- returns
a zipped Flux
-
def
zip[T1, T2, T3, T4, T5](source1: Publisher[_ <: T1], source2: Publisher[_ <: T2], source3: Publisher[_ <: T3], source4: Publisher[_ <: T4], source5: Publisher[_ <: T5]): Flux[(T1, T2, T3, T4, T5)]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

- T1
type of the value from source1
- T2
type of the value from source2
- T3
type of the value from source3
- T4
type of the value from source4
- T5
type of the value from source5
- source1
The first upstream Publisher to subscribe to.
- source2
The second upstream Publisher to subscribe to.
- source3
The third upstream Publisher to subscribe to.
- source4
The fourth upstream Publisher to subscribe to.
- source5
The fifth upstream Publisher to subscribe to.
- returns
a zipped Flux
-
def
zip[T1, T2, T3, T4](source1: Publisher[_ <: T1], source2: Publisher[_ <: T2], source3: Publisher[_ <: T3], source4: Publisher[_ <: T4]): Flux[(T1, T2, T3, T4)]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

- T1
type of the value from source1
- T2
type of the value from source2
- T3
type of the value from source3
- T4
type of the value from source4
- source1
The first upstream Publisher to subscribe to.
- source2
The second upstream Publisher to subscribe to.
- source3
The third upstream Publisher to subscribe to.
- source4
The fourth upstream Publisher to subscribe to.
- returns
a zipped Flux
-
def
zip[T1, T2, T3](source1: Publisher[_ <: T1], source2: Publisher[_ <: T2], source3: Publisher[_ <: T3]): Flux[(T1, T2, T3)]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

- T1
type of the value from source1
- T2
type of the value from source2
- T3
type of the value from source3
- source1
The first upstream Publisher to subscribe to.
- source2
The second upstream Publisher to subscribe to.
- source3
The third upstream Publisher to subscribe to.
- returns
a zipped Flux
-
def
zip[T1, T2](source1: Publisher[_ <: T1], source2: Publisher[_ <: T2]): Flux[(T1, T2)]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

- T1
type of the value from source1
- T2
type of the value from source2
- source1
The first upstream Publisher to subscribe to.
- source2
The second upstream Publisher to subscribe to.
- returns
a zipped Flux
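A minimal sketch of the two-source variant (assuming reactor-scala-extensions); note the Scala wrapper yields a native (T1, T2) tuple rather than a Tuple2:

```scala
import reactor.core.scala.publisher.Flux

// The shorter source bounds the output length.
val names = Flux.just("a", "b", "c")
val nums  = Flux.just(1, 2)
Flux.zip(names, nums).subscribe { case (s, i) => println(s + " -> " + i) }
```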
-
def
zip[T1, T2, O](source1: Publisher[_ <: T1], source2: Publisher[_ <: T2], combinator: (T1, T2) ⇒ O): Flux[O]
"Step-Merge" especially useful in Scatter-Gather scenarios.
"Step-Merge" especially useful in Scatter-Gather scenarios. The operator will forward all combinations produced by the passed combinator function of the most recent items emitted by each source until any of them completes. Errors will immediately be forwarded.

- T1
type of the value from source1
- T2
type of the value from source2
- O
The produced output after transformation by the combinator
- source1
The first upstream Publisher to subscribe to.
- source2
The second upstream Publisher to subscribe to.
- combinator
The aggregate function that will receive a unique value from each upstream and return the value to signal downstream
- returns
a zipped Flux
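A sketch of the combinator variant above (assuming reactor-scala-extensions; sources are illustrative):

```scala
import reactor.core.scala.publisher.Flux

// Combines pairwise with an explicit function instead of tupling.
Flux.zip[String, Int, String](
  Flux.just("x", "y"),
  Flux.just(1, 2),
  (s: String, i: Int) => s * i) // "x", "yy"
  .subscribe(s => println(s))
```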







