@Namespace(value="tensorflow::ops") @NoOffset public static class tensorflow.ApplyAdadelta extends Pointer
Update '*var' according to the adadelta scheme.
accum = rho * accum + (1 - rho) * grad.square();
update = (accum_update + epsilon).sqrt() * (accum + epsilon).rsqrt() * grad;
accum_update = rho * accum_update + (1 - rho) * update.square();
var -= update;
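The update rule above can be sketched in plain Java for the scalar case. This is an illustrative sketch, not the TensorFlow kernel; note that the kernel additionally scales the step by lr (the pseudocode above omits this), and the class and method names here are hypothetical.

```java
// Scalar sketch of one Adadelta step (hypothetical helper, not the
// TensorFlow kernel). Names mirror the op's arguments.
public class AdadeltaSketch {
    // Returns {var, accum, accumUpdate} after one update step.
    static double[] step(double var, double accum, double accumUpdate,
                         double lr, double rho, double epsilon, double grad) {
        // Accumulate the squared gradient with decay rho.
        accum = rho * accum + (1 - rho) * grad * grad;
        // RMS-ratio step: sqrt of past update accumulator over sqrt of accum.
        double update = Math.sqrt(accumUpdate + epsilon)
                      / Math.sqrt(accum + epsilon) * grad;
        // Accumulate the squared update with the same decay.
        accumUpdate = rho * accumUpdate + (1 - rho) * update * update;
        // Apply the step; the kernel scales by lr even though the
        // documented pseudocode writes only "var -= update".
        var -= lr * update;
        return new double[]{var, accum, accumUpdate};
    }

    public static void main(String[] args) {
        double[] s = step(1.0, 0.0, 0.0, 1.0, 0.95, 1e-6, 0.5);
        System.out.println("var=" + s[0] + " accum=" + s[1]
                           + " accum_update=" + s[2]);
    }
}
```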
Arguments:
* scope: A Scope object
* var: Should be from a Variable().
* accum: Should be from a Variable().
* accum_update: Should be from a Variable().
* lr: Scaling factor. Must be a scalar.
* rho: Decay factor. Must be a scalar.
* epsilon: Constant factor. Must be a scalar.
* grad: The gradient.
Optional attributes (see Attrs):
* use_locking: If True, updates to the var, accum and accum_update tensors are protected by
a lock; otherwise the behavior is undefined, but may exhibit less contention.
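The locking attribute guards the op's read-modify-write against concurrent updates to the same variable. The following plain-Java analogy (a hypothetical LockedVar class, not part of these bindings) shows the kind of race a lock prevents:

```java
// Plain-Java analogy of use_locking=true (hypothetical class, not the
// TensorFlow implementation): the whole read-modify-write on the shared
// variable happens under a lock, so no concurrent update is lost.
public class LockedVar {
    private double var = 0.0;
    private final Object lock = new Object();

    // Analogue of a locked apply: subtract the update atomically.
    void applyLocked(double update) {
        synchronized (lock) { var -= update; }
    }

    double value() { synchronized (lock) { return var; } }

    public static void main(String[] args) throws InterruptedException {
        LockedVar v = new LockedVar();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 10000; k++) v.applyLocked(1.0);
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(v.value()); // prints -40000.0: no lost updates
    }
}
```

Without the synchronized blocks, two threads could read the same old value and one update would be silently dropped, which is the "undefined behavior" the attribute documentation refers to.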
Returns:
* Output: Same as "var".
| Modifier and Type | Class and Description |
|---|---|
| static class | tensorflow.ApplyAdadelta.Attrs: Optional attribute setters for ApplyAdadelta |
Nested classes inherited from Pointer: Pointer.CustomDeallocator, Pointer.Deallocator, Pointer.NativeDeallocator

| Constructor and Description |
|---|
| ApplyAdadelta(Pointer p): Pointer cast constructor. |
| ApplyAdadelta(tensorflow.Scope scope, tensorflow.Input var, tensorflow.Input accum, tensorflow.Input accum_update, tensorflow.Input lr, tensorflow.Input rho, tensorflow.Input epsilon, tensorflow.Input grad) |
| ApplyAdadelta(tensorflow.Scope scope, tensorflow.Input var, tensorflow.Input accum, tensorflow.Input accum_update, tensorflow.Input lr, tensorflow.Input rho, tensorflow.Input epsilon, tensorflow.Input grad, tensorflow.ApplyAdadelta.Attrs attrs) |
| Modifier and Type | Method and Description |
|---|---|
| tensorflow.Input | asInput() |
| tensorflow.Output | asOutput() |
| tensorflow.Node | node() |
| tensorflow.Operation | operation() |
| tensorflow.ApplyAdadelta | operation(tensorflow.Operation operation) |
| tensorflow.Output | out() |
| tensorflow.ApplyAdadelta | out(tensorflow.Output out) |
| static tensorflow.ApplyAdadelta.Attrs | UseLocking(boolean x) |
Methods inherited from Pointer: address, asBuffer, asByteBuffer, availablePhysicalBytes, calloc, capacity, close, deallocate, deallocateReferences, deallocator, equals, fill, formatBytes, free, hashCode, isNull, limit, malloc, maxBytes, maxPhysicalBytes, memchr, memcmp, memcpy, memmove, memset, offsetof, parseBytes, physicalBytes, position, put, realloc, setNull, sizeof, toString, totalBytes, totalPhysicalBytes, withDeallocator, zero

public ApplyAdadelta(Pointer p)
Pointer cast constructor. See also: Pointer.Pointer(Pointer).

public ApplyAdadelta(@Const @ByRef tensorflow.Scope scope, @ByVal tensorflow.Input var, @ByVal tensorflow.Input accum, @ByVal tensorflow.Input accum_update, @ByVal tensorflow.Input lr, @ByVal tensorflow.Input rho, @ByVal tensorflow.Input epsilon, @ByVal tensorflow.Input grad)
public ApplyAdadelta(@Const @ByRef tensorflow.Scope scope, @ByVal tensorflow.Input var, @ByVal tensorflow.Input accum, @ByVal tensorflow.Input accum_update, @ByVal tensorflow.Input lr, @ByVal tensorflow.Input rho, @ByVal tensorflow.Input epsilon, @ByVal tensorflow.Input grad, @Const @ByRef tensorflow.ApplyAdadelta.Attrs attrs)
@ByVal @Name(value="operator tensorflow::Output") public tensorflow.Output asOutput()
@ByVal @Name(value="operator tensorflow::Input") public tensorflow.Input asInput()
public tensorflow.Node node()
@ByVal public static tensorflow.ApplyAdadelta.Attrs UseLocking(@Cast(value="bool") boolean x)
@ByRef public tensorflow.Operation operation()
public tensorflow.ApplyAdadelta operation(tensorflow.Operation operation)
@ByRef public tensorflow.Output out()
public tensorflow.ApplyAdadelta out(tensorflow.Output out)
Copyright © 2019. All rights reserved.