\newcommand{\B}[1]{{\bf #1}}
\newcommand{\R}[1]{{\rm #1}}
_contents: 1 | Table of Contents |
install: 2 | Installing pycppad |
get_started.py: 3 | get_started: Example and Test |
example: 4 | List of All the pycppad Examples |
ad_variable: 5 | AD Variable Methods |
ad_function: 6 | AD Function Methods |
two_levels.py: 7 | Using Two Levels of AD: Example and Test |
runge_kutta_4: 8 | Fourth Order Runge Kutta |
whats_new_12: 9 | Extensions, Bug Fixes, and Changes During 2012 |
license: 10 | License |
_reference: 11 | Alphabetic Listing of Cross Reference Tags |
_index: 12 | Keyword Index |
_external: 13 | External Internet References |
pycppad-20121020: A Python Algorithm Derivative Package
tar -xvzf pycppad-20121020.tar.gz
which creates the directory pycppad-20121020.
The following settings in the file
pycppad-20121020/setup.py
must be set to agree with your system:
# Directory where CppAD include files are located
cppad_include_dir = [ '/usr/include' ]
cppad_include_dir = [ os.environ['HOME'] + '/prefix/cppad/include' ]
# Directory where Boost Python library and include files are located
boost_python_include_dir = [ '/usr/include' ]
boost_python_lib_dir = [ '/usr/lib' ]
# Name of the Boost Python library in boost_python_lib_dir.
boost_python_lib = [ 'boost_python-mt' ]
Note that Boost Python and CppAD must be installed before you can
properly set this information.
Change into the directory pycppad-20121020 and execute the command
./setup.py build_ext --inplace --debug --undef NDEBUG
to compile and link a version of the CppAD extension module
with debugging (improved error messaging).
To compile and link an optimized version instead, execute
./setup.py build_ext --inplace
Note that in the optimized version certain error checking is not
done, and improper use of pycppad may lead to a segmentation fault.
If your compiler is gcc, the warning
cc1plus: warning: command line option "-Wstrict-prototypes" is valid for Ada/C/ObjC but not for C++
will be printed.
This warning is not requested by setup.py; it is caused by a bug
in the Python distutils package (http://docs.python.org/distutils/).
If you are running Cygwin, you will need to run its rebaseall
program. For example, execute the following steps:
1. Use the ps -e command to list the running cygwin processes,
and shut them all down.
2. Select Start | Run and enter cmd as the program to run.
3. Change into the cygwin bin directory.
If you installed cygwin in the default location, the following command
will do this:
cd c:\cygwin\bin
4. Start the ash shell by executing the command
ash.exe
5. Execute the command
/usr/bin/rebaseall
This will take a few minutes to execute.
When it is done, you can close the command window by executing
the command exit twice.
Change into the directory pycppad-20121020 and execute the command
python test_example.py
You can run some more tests with the command
python test_more.py with_debugging
where with_debugging is True or False,
depending on whether you built pycppad with debugging.
If you get an error message about cppad_ being missing,
it is probably because you did not use the --inplace
flag when you built pycppad.
Installing pycppad copies it to the standard location for your system.
You may or may not perform this step.
Change into the directory pycppad-20121020 and execute the command
./setup.py build_ext --debug --undef NDEBUG install --prefix=prefix
or
./setup.py build_ext install --prefix=prefix
where prefix is the prefix for the location
where you wish to install pycppad.
Note that some common choices for prefix are:
$HOME | if you are not the system administrator for this machine |
/usr/local | you are the administrator and the system has a package manager (like yum or apt-get) |
/usr | you are the administrator but the system does not have a package manager |
The Python distutils package does not provide an uninstall command.
You can uninstall the pycppad package by removing the entries
prefix/lib/pythonmajor.minor/site-packages/pycppad
prefix/lib/pythonmajor.minor/site-packages/pycppad-20121020.egg-info
prefix/share/doc/pycppad
where
major
and
minor
are the major and minor version numbers printed by the command
python --version
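The removal above can also be scripted. The sketch below is hypothetical (not part of the pycppad distribution); the prefix and python version values are illustrative assumptions that you would substitute for your system, and the dry-run flag keeps it from deleting anything until you change it:

```python
# Hypothetical uninstall sketch: remove the three install entries listed
# above.  The prefix and pyver values are illustrative assumptions.
import os
import shutil

prefix = "/usr/local"   # assumed install prefix; substitute your own
pyver  = "2.7"          # assumed python major.minor; see python --version

entries = [
    prefix + "/lib/python" + pyver + "/site-packages/pycppad",
    prefix + "/lib/python" + pyver + "/site-packages/pycppad-20121020.egg-info",
    prefix + "/share/doc/pycppad",
]

dry_run = True          # set to False to actually remove the entries
for entry in entries:
    if dry_run:
        print("would remove: " + entry)
    elif os.path.isdir(entry):
        shutil.rmtree(entry)
    elif os.path.exists(entry):
        os.remove(entry)
```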
If you have not installed pycppad,
you will not be able to use the python command
import pycppad
unless the distribution directory pycppad-20121020
is in your python path.
If you have installed pycppad,
the installation directory
prefix/lib/pythonmajor.minor/site-packages
must be in your python path (to use the import pycppad command).
You can check your python path with the following commands
python
import sys
sys.path
If the required directory is not yet there,
you could add the directory above to your python path using the command
sys.path.append('prefix/lib/pythonmajor.minor/site-packages')
You can avoid having to do this every time you load python by adding
the path to your environment variable PYTHONPATH
.
For example, if you are using the bash
shell, you could
add the command
export PYTHONPATH="prefix/lib/pythonmajor.minor/site-packages"
to your $HOME/.bashrc
file.
The documentation for pycppad starts out in the directory
pycppad-20121020/doc
During the installation process, it is copied to the directory
prefix/share/doc/pycppad
Consider the function F : \B{R}^2 \rightarrow \B{R}
and its partial derivatives:
\[
\begin{array}{rcl}
F(x) & = & \exp \left[ - ( x_0^2 + x_1^2 ) / 2 \right] \\
\partial_{x_0} F(x) & = & - F(x) * x_0 \\
\partial_{x_1} F(x) & = & - F(x) * x_1
\end{array}
\]
The following Python code computes these derivatives using pycppad
and then checks the results for correctness:
from pycppad import *
import numpy

def pycppad_test_get_started() :
    def F(x) :                                   # function to be differentiated
        return exp(-(x[0]**2. + x[1]**2.) / 2.)  # the Gaussian density
    x   = numpy.array( [ 1., 2. ] )
    a_x = independent(x)
    a_y = numpy.array( [ F(a_x) ] )
    f   = adfun(a_x, a_y)
    J   = f.jacobian(x)                          # J = F'(x)
    assert abs( J[0, 0] + F(x) * x[0] ) < 1e-10  # J[0,0] ~= - F(x) * x[0]
    assert abs( J[0, 1] + F(x) * x[1] ) < 1e-10  # J[0,1] ~= - F(x) * x[1]
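The partial derivative formulas above can also be checked without pycppad, using a plain numpy central-difference approximation (a sketch for verification only, not part of the pycppad API):

```python
import numpy

# Gaussian density from the example above
def F(x):
    return numpy.exp(-(x[0]**2 + x[1]**2) / 2.)

x = numpy.array([1., 2.])
h = 1e-6   # finite-difference step

# central differences approximate the partials, which equal - F(x) * x[j]
for j in range(2):
    e = numpy.zeros(2)
    e[j] = h
    fd = (F(x + e) - F(x - e)) / (2. * h)
    assert abs(fd + F(x) * x[j]) < 1e-8
```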
6.3.1: abort_recording.py | abort_recording: Example and Test |
5.8.1: abs.py | abs: Example and Test |
5.1.1: ad.py | ad: Example and Test |
6.2.1: adfun.py | adfun: Example and Test |
5.4.1: ad_numeric.py | Binary Numeric Operators With an AD Result: Example and Test |
5.3.1: ad_unary.py | Unary Plus and Minus Operators: Example and Test |
5.5.1: assign_op.py | Computed Assignment Operators: Example and Test |
5.9.1: condexp.py | condexp: Example and Test |
5.6.1: compare_op.py | a_float Comparison Operators: Example and Test |
6.4.1: forward_0.py | Forward Order Zero: Example and Test |
6.4.2: forward_1.py | Forward Order One: Example and Test |
3: get_started.py | get_started: Example and Test |
6.7.1: hessian.py | Hessian Driver: Example and Test |
6.1.1: independent.py | independent: Example and Test |
6.6.1: jacobian.py | Entire Derivative: Example and Test |
6.8.1: optimize.py | Optimize Function Object: Example and Test |
6.5.1: reverse_1.py | Reverse Order One: Example and Test |
6.5.2: reverse_2.py | Reverse Order Two: Example and Test |
8.2: runge_kutta_4_ad.py | runge_kutta_4 An AD Example and Test |
8.3: runge_kutta_4_cpp.py | runge_kutta_4 With C++ Speed: Example and Test |
8.1: runge_kutta_4_correct.py | runge_kutta_4 A Correctness Example and Test |
5.7.1: std_math.py | Standard Math Unary Functions: Example and Test |
7: two_levels.py | Using Two Levels of AD: Example and Test |
5.2.1: value.py | value: Example and Test |
ad: 5.1 | Create an Object With One Higher Level of AD |
value: 5.2 | Create an Object With One Lower Level of AD |
ad_unary: 5.3 | Unary Plus and Minus Operators |
ad_numeric: 5.4 | Binary Numeric Operators With an AD Result |
assign_op: 5.5 | Computed Assignment Operators |
compare_op: 5.6 | Binary Comparison Operators |
std_math: 5.7 | Standard Math Unary Functions |
abs: 5.8 | Absolute Value Functions |
condexp: 5.9 | Conditional Expressions |
a_x = ad(x)
This creates an object a_x that records floating point operations.
An adfun object (section 6.2) can later use this recording to evaluate
function values and derivatives. These later evaluations are done
using the same type as x
(except when x is an instance of int,
in which case the later evaluations are done using float operations).
The argument x can be an instance of int (AD level 0),
an instance of float (AD level 0),
or an a_float (AD level 1).
It may also be a numpy.array with one of the
element types listed in the previous sentence.
If x is an instance of int or float,
a_x is an a_float (AD level 1).
If x is an a_float,
a_x is an a2float (AD level 2).
If x is a numpy.array,
a_x is also a numpy.array with the same shape as x.
from pycppad import *
import numpy

def pycppad_test_ad() :
    x   = 1
    a_x = ad(x)
    a2x = ad(a_x)
    #
    assert type(a_x) == a_float  and a_x == x
    assert type(a2x) == a2float and a2x == x
    #
    x   = numpy.array( [ 1 , 2 , 3 ] )
    a_x = ad(x)
    a2x = ad(a_x)
    #
    for i in range( len(a_x) ) :
        assert type(a_x[i]) == a_float and a_x[i] == x[i]
    for i in range( len(a2x) ) :
        assert type(a2x[i]) == a2float and a2x[i] == x[i]
x = value(a_x)
The argument a_x must be an a_float (AD level 1)
or an a2float (AD level 2).
It may also be a numpy.array with one of the
element types listed in the previous sentence.
If a_x is an a_float,
x is a float (AD level 0).
If a_x is an a2float,
x is an a_float (AD level 1).
If a_x is a numpy.array,
x is also a numpy.array with the same shape as a_x.
from pycppad import *
import numpy

# Example using a_float ------------------------------------------------------
def pycppad_test_value() :
    x   = 2
    a_x = ad(x)
    #
    assert type(value(a_x)) == float and value(a_x) == x
    #
    x   = numpy.array( [ 1 , 2 , 3 ] )
    a_x = ad(x)
    #
    for i in range( len(a_x) ) :
        xi = value(a_x[i])
        assert type(xi) == float and xi == x[i]

# Example using a2float ------------------------------------------------------
def pycppad_test_value_a2() :
    x   = 2
    a2x = ad(ad(x))
    #
    assert type(value(a2x)) == a_float and value(a2x) == x
    #
    x   = numpy.array( [ 1 , 2 , 3 ] )
    a2x = ad(ad(x))
    #
    for i in range( len(a2x) ) :
        a_xi = value(a2x[i])
        assert type(a_xi) == a_float and a_xi == x[i]
y = + x
y = - x
The unary plus (minus) operator above results in
y equal to x (minus x).
The operand x can be an a_float or a2float,
and the result y will have the same type as x.
The operand x may also be a numpy.array
with elements of type a_float or a2float.
In this case, the result y is an array with the same shape
and element type as x.
from pycppad import *
import numpy

# Example using a_float ------------------------------------------------------
def pycppad_test_ad_unary() :
    x       = ad(2.)
    plus_x  = + x
    minus_x = - x
    # test using corresponding unary float operators
    assert value(plus_x)  == + value(x)
    assert value(minus_x) == - value(x)
    #
    x       = ad( numpy.array( [ 1. , 2. ] ) )
    plus_x  = + x
    minus_x = - x
    # test using corresponding unary float operators
    assert numpy.all( value(plus_x)  == + value(x) )
    assert numpy.all( value(minus_x) == - value(x) )

# Example using a2float ------------------------------------------------------
def pycppad_test_ad_unary_a2() :
    x       = ad( ad(2.) )
    plus_x  = + x
    minus_x = - x
    # test using corresponding unary a_float operators
    assert value(plus_x)  == + value(x)
    assert value(minus_x) == - value(x)
    #
    x       = ad( ad( numpy.array( [ 1. , 2. ] ) ) )
    plus_x  = + x
    minus_x = - x
    # test using corresponding unary a_float operators
    assert numpy.all( value(plus_x)  == + value(x) )
    assert numpy.all( value(minus_x) == - value(x) )
z = x op y
This sets z to the result of the binary operation defined by op,
with x as the left operand and y as the right operand.
The possible values for op are:
op | Meaning |
+ | addition |
- | subtraction |
* | multiplication |
/ | division |
** | exponentiation |
The table below lists the possible types for x and y
and the corresponding result type for z
(a dash indicates the combination is not supported):
x \ y     float     a_float   a2float
float     float     a_float   a2float
a_float   a_float   a_float   -
a2float   a2float   -         a2float
The type float does not need to be matched exactly,
but rather as an instance of float.
Either x or y or both may be a numpy.array with elements
that match one of the possible type choices above.
If both x and y are arrays, they must have the same shape.
When either x or y is an array,
the result z is an array with the same shape.
The types of the elements of z correspond to the table above
(when the result type is float,
this only refers to the element types matching as instances of float).
from pycppad import *

# Example using a_float -----------------------------------------------------
def pycppad_test_ad_numeric() :
    x   = 2.
    y   = 3.
    a_x = ad(x)
    a_y = ad(y)
    #
    assert a_x + a_y == x + y
    assert a_x + y   == x + y
    assert x   + a_y == x + y
    #
    assert a_x - a_y == x - y
    assert a_x - y   == x - y
    assert x   - a_y == x - y
    #
    assert a_x * a_y == x * y
    assert a_x * y   == x * y
    assert x   * a_y == x * y
    #
    assert a_x / a_y == x / y
    assert a_x / y   == x / y
    assert x   / a_y == x / y
    #
    assert a_x ** a_y == x ** y
    assert a_x ** y   == x ** y
    assert x   ** a_y == x ** y

# Example using a2float -----------------------------------------------------
def pycppad_test_ad_numeric_a2() :
    x   = 2.
    y   = 3.
    a2x = ad(ad(x))
    a2y = ad(ad(y))
    #
    assert a2x + a2y == x + y
    assert a2x + y   == x + y
    assert x   + a2y == x + y
    #
    assert a2x - a2y == x - y
    assert a2x - y   == x - y
    assert x   - a2y == x - y
    #
    assert a2x * a2y == x * y
    assert a2x * y   == x * y
    assert x   * a2y == x * y
    #
    assert a2x / a2y == x / y
    assert a2x / y   == x / y
    assert x   / a2y == x / y
    #
    assert a2x ** a2y == x ** y
    assert a2x ** y   == x ** y
    assert x   ** a2y == x ** y
u op= x
We use y (z) to refer to the value of u
before (after) the operation.
This operation sets z equal to y op x.
The possible values for op are:
op | Meaning |
+ | addition |
- | subtraction |
* | multiplication |
/ | division |
The table below lists the possible types for x and y
(the value of u before the operation) and the corresponding type for z
(the value of u after the operation);
a dash indicates the combination is not supported:
y \ x     float     a_float   a2float
float     float     a_float   a2float
a_float   a_float   a_float   -
a2float   a2float   -         a2float
The type float does not need to be matched exactly,
but rather as an instance of float.
Either x or y or both may be a numpy.array with elements
that match one of the possible type choices above.
If both x and y are arrays, they must have the same shape.
When either x or y is an array,
the result z is an array with the same shape.
The types of the elements of z correspond to the table above
(when the result type is float,
this only refers to the element types matching as instances of float).
from pycppad import *

# Example using a_float ------------------------------------------------------
def pycppad_test_assign_op() :
    x = 2.
    y = 3.
    #
    tmp  = ad(x)
    tmp += ad(y)
    assert tmp == x + y
    tmp  = ad(x)
    tmp += y
    assert tmp == x + y
    #
    tmp  = ad(x)
    tmp -= ad(y)
    assert tmp == x - y
    tmp  = ad(x)
    tmp -= y
    assert tmp == x - y
    #
    tmp  = ad(x)
    tmp *= ad(y)
    assert tmp == x * y
    tmp  = ad(x)
    tmp *= y
    assert tmp == x * y
    #
    tmp  = ad(x)
    tmp /= ad(y)
    assert tmp == x / y
    tmp  = ad(x)
    tmp /= y
    assert tmp == x / y

# Example using a2float ------------------------------------------------------
def pycppad_test_assign_op_a2() :
    x = 2.
    y = 3.
    #
    tmp  = ad(ad(x))
    tmp += ad(ad(y))
    assert tmp == x + y
    tmp  = ad(ad(x))
    tmp += y
    assert tmp == x + y
    #
    tmp  = ad(ad(x))
    tmp -= ad(ad(y))
    assert tmp == x - y
    tmp  = ad(ad(x))
    tmp -= y
    assert tmp == x - y
    #
    tmp  = ad(ad(x))
    tmp *= ad(ad(y))
    assert tmp == x * y
    tmp  = ad(ad(x))
    tmp *= y
    assert tmp == x * y
    #
    tmp  = ad(ad(x))
    tmp /= ad(ad(y))
    assert tmp == x / y
    tmp  = ad(ad(x))
    tmp /= y
    assert tmp == x / y
z = x op y
This sets z to the result of the binary comparison defined by op,
with x as the left operand and y as the right operand.
The possible values for op are:
op | Meaning |
< | less than |
> | greater than |
<= | less than or equal |
>= | greater than or equal |
== | equal |
!= | not equal |
The table below lists the possible types for x and y.
The corresponding result type for z is always bool.
x \ y     float   a_float   a2float
float     yes     yes       yes
a_float   yes     yes       no
a2float   yes     no        yes
The type float does not need to be matched exactly,
but rather as an instance of float.
Either x or y or both may be a numpy.array with elements
that match one of the possible type choices above.
If both x and y are arrays, they must have the same shape.
When either x or y is an array,
the result z is an array with the same shape,
and the elements of z are instances of bool.
from pycppad import *

# Example using a_float ------------------------------------------------------
def pycppad_test_compare_op():
    x = ad(2.)
    y = ad(3.)
    z = ad(2.)
    # comparisons that should be true
    assert x == x
    assert x == z
    assert x != y
    assert x <= x
    assert x <= z
    assert x <= y
    assert x <  y
    # comparisons that should be false
    assert not x == y
    assert not x != z
    assert not x != x
    assert not x >= y
    assert not x >  y

# Example using a2float ------------------------------------------------------
def pycppad_test_compare_op_a2():
    x = ad(ad(2.))
    y = ad(ad(3.))
    z = ad(ad(2.))
    # comparisons that should be true
    assert x == x
    assert x == z
    assert x != y
    assert x <= x
    assert x <= z
    assert x <= y
    assert x <  y
    # comparisons that should be false
    assert not x == y
    assert not x != z
    assert not x != x
    assert not x >= y
    assert not x >  y
y = fun(x)
This evaluates the standard math function fun,
where fun has one argument.
The argument x can be an instance of float,
an a_float, an a2float, or a numpy.array of such objects.
If x is an instance of float,
y will also be an instance of float.
Otherwise y will have the same type as x.
In the case where x is an array,
y will have the same shape as x,
and the elements of y will have the same type as the elements of x.
The function fun can be any of the following:
arccos, arcsin, arctan, cos, cosh, exp, log, log10,
sin, sinh, sqrt, tan, or tanh.
from pycppad import *
import numpy
import math

# Example using a_float ----------------------------------------------------
def pycppad_test_std_math():
    delta = 10. * numpy.finfo(float).eps
    pi    = numpy.pi
    x     = pi / 6
    a_x   = ad(x)
    # all the a_float unary standard math functions
    assert abs( arccos(a_x) - math.acos(x) )  < delta
    assert abs( arcsin(a_x) - math.asin(x) )  < delta
    assert abs( arctan(a_x) - math.atan(x) )  < delta
    assert abs( cos(a_x)    - math.cos(x) )   < delta
    assert abs( cosh(a_x)   - math.cosh(x) )  < delta
    assert abs( exp(a_x)    - math.exp(x) )   < delta
    assert abs( log(a_x)    - math.log(x) )   < delta
    assert abs( log10(a_x)  - math.log10(x) ) < delta
    assert abs( sin(a_x)    - math.sin(x) )   < delta
    assert abs( sinh(a_x)   - math.sinh(x) )  < delta
    assert abs( sqrt(a_x)   - math.sqrt(x) )  < delta
    assert abs( tan(a_x)    - math.tan(x) )   < delta
    assert abs( tanh(a_x)   - math.tanh(x) )  < delta
    # example array and derivative calculation
    n   = 5
    x   = numpy.array( [ 2 * pi * j / n for j in range(n) ] )
    a_x = independent(x)
    a_y = sin(a_x)
    f   = adfun(a_x, a_y)
    J   = f.jacobian(x)
    for j in range(n) :
        for k in range(n) :
            if j == k : assert abs( J[j][k] - cos( x[j] ) ) < delta
            else      : assert J[j][k] == 0.

# Example using a2float ----------------------------------------------------
def pycppad_test_std_math_a2():
    delta = 10. * numpy.finfo(float).eps
    pi    = numpy.pi
    x     = pi / 6
    a2x   = ad(ad(x))
    # all the a2float unary standard math functions
    assert abs( arccos(a2x) - math.acos(x) )  < delta
    assert abs( arcsin(a2x) - math.asin(x) )  < delta
    assert abs( arctan(a2x) - math.atan(x) )  < delta
    assert abs( cos(a2x)    - math.cos(x) )   < delta
    assert abs( cosh(a2x)   - math.cosh(x) )  < delta
    assert abs( exp(a2x)    - math.exp(x) )   < delta
    assert abs( log(a2x)    - math.log(x) )   < delta
    assert abs( log10(a2x)  - math.log10(x) ) < delta
    assert abs( sin(a2x)    - math.sin(x) )   < delta
    assert abs( sinh(a2x)   - math.sinh(x) )  < delta
    assert abs( sqrt(a2x)   - math.sqrt(x) )  < delta
    assert abs( tan(a2x)    - math.tan(x) )   < delta
    assert abs( tanh(a2x)   - math.tanh(x) )  < delta
    # example array and derivative calculation
    n   = 5
    x   = numpy.array( [ 2 * pi * j / n for j in range(n) ] )
    a_x = ad(x)
    a2x = independent(a_x)
    a2y = sin(a2x)
    a_f = adfun(a2x, a2y)
    a_J = a_f.jacobian(a_x)
    for j in range(n) :
        for k in range(n) :
            if j == k : assert abs( a_J[j][k] - cos( x[j] ) ) < delta
            else      : assert a_J[j][k] == 0.
y = abs(x)
This sets y equal to the absolute value of x.
The argument x can be an instance of float,
an a_float, an a2float, or a numpy.array of such objects.
If x is an instance of float,
y will also be an instance of float.
Otherwise y will have the same type as x.
In the case where x is an array,
y will have the same shape as x,
and the elements of y will have the same type as the elements of x.
The derivative of the absolute value function is given by
\[
\R{abs}^{(1)} (x) = \R{sign} (x) = \left\{ \begin{array}{ll}
1 & \R{if} \; x > 0
\\
0 & \R{if} \; x = 0
\\
-1 & \R{if} \; x < 0
\end{array} \right.
\]
The corresponding directional derivative is defined by
\[
\R{abs}^\circ ( x , d ) = \lim_{\lambda \downarrow 0 }
\frac{\R{abs}(x + \lambda d) - \R{abs}(x) }{ \lambda }
\]
For
x \neq 0
,
\[
\R{abs}^\circ ( x , d ) = \R{abs}^{(1)} ( x ) * d
\]
and
\R{abs}^\circ (0 , d) = |d|
.
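The formulas above can be checked numerically in plain Python (no pycppad required). The helper name `abs_dir` below is introduced for illustration only; it approximates the one-sided limit that defines the directional derivative:

```python
# Numeric sketch of the directional derivative of abs(x):
#   abs_circ(x, d) = lim_{lam -> 0+} (abs(x + lam*d) - abs(x)) / lam
# The helper abs_dir is hypothetical, defined only for this check.
def abs_dir(x, d, lam=1e-8):
    return (abs(x + lam * d) - abs(x)) / lam

# at x = 0 the result is |d|, regardless of the sign of d
assert abs(abs_dir(0., 2.)  - 2.) < 1e-6
assert abs(abs_dir(0., -2.) - 2.) < 1e-6
# away from zero it equals sign(x) * d
assert abs(abs_dir(3., -2.)  - (-2.)) < 1e-6
assert abs(abs_dir(-3., -2.) - 2.)    < 1e-6
```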
# Example using a_float ----------------------------------------------------
from pycppad import *
import numpy

def pycppad_test_abs() :
    x   = numpy.array( [ -1., 0., 1. ] )
    n   = len(x)
    a_x = independent(x)
    a_y = abs( a_x )
    f   = adfun(a_x, a_y)
    f.forward(0, x)
    dx  = numpy.zeros(n, dtype=float)
    for i in range( n ) :
        dx[i] = 1.
        df    = f.forward(1, dx)
        if x[i] > 0. :
            assert df[i] == +1.
        elif x[i] < 0. :
            assert df[i] == -1.
        else :
            # There was a change in the CppAD specifications for the abs
            # function; see 12-30 on
            # http://www.coin-or.org/CppAD/Doc/whats_new_11.htm
            assert df[i] == +1. or df[i] == 0.
        dx[i] = -1.
        df    = f.forward(1, dx)
        if x[i] > 0. :
            assert df[i] == -1.
        elif x[i] < 0. :
            assert df[i] == +1.
        else :
            assert df[i] == +1. or df[i] == 0.
        dx[i] = 0.

# Example using a2float ----------------------------------------------------
def pycppad_test_abs_a2() :
    x   = ad( numpy.array( [ -1, 0, 1 ] ) )
    n   = len(x)
    a_x = independent(x)
    a_y = abs( a_x )
    f   = adfun(a_x, a_y)
    f.forward(0, x)
    dx  = numpy.array( list( ad(0) for i in range(n) ) )
    for i in range( n ) :
        dx[i] = ad(0)
    for i in range( n ) :
        dx[i] = ad(1)
        df    = f.forward(1, dx)
        if x[i] > 0. :
            assert df[i] == +1.
        elif x[i] < 0. :
            assert df[i] == -1.
        else :
            assert df[i] == +1. or df[i] == 0.
        dx[i] = ad(-1)
        df    = f.forward(1, dx)
        if x[i] > 0. :
            assert df[i] == -1.
        elif x[i] < 0. :
            assert df[i] == +1.
        else :
            assert df[i] == +1. or df[i] == 0.
        dx[i] = ad(0)
result = condexp_rel(left, right, if_true, if_false)
This is equivalent to the pseudocode
if( left op right ) result = if_true
else result = if_false
Here rel represents one of the following two-character relations:
lt, le, eq, ge, gt.
The relation rel and the operator op have the following correspondence:
rel | lt | le | eq | ge | gt |
op | < | <= | == | >= | > |
As in the table above, rel determines which comparison operator op
is used when comparing left and right.
The argument left must have type a_float or a2float;
it specifies the value for the left side of the comparison operator.
The argument right must have the same type as left;
it specifies the value for the right side of the comparison operator.
The argument if_true must have the same type as left;
it specifies the return value if the result of the comparison is true.
The argument if_false must have the same type as left;
it specifies the return value if the result of the comparison is false.
The return value result has the same type as left.
# Example using a_float ----------------------------------------------------
from pycppad import *
import numpy

def pycppad_test_condexp() :
    x   = numpy.array( [ 1., 1., 3., 4. ] )
    a_x = independent(x)
    a_left     = a_x[0]
    a_right    = a_x[1]
    a_if_true  = a_x[2]
    a_if_false = a_x[3]
    a_y_lt = condexp_lt(a_left, a_right, a_if_true, a_if_false)
    a_y_le = condexp_le(a_left, a_right, a_if_true, a_if_false)
    a_y_eq = condexp_eq(a_left, a_right, a_if_true, a_if_false)
    a_y_ge = condexp_ge(a_left, a_right, a_if_true, a_if_false)
    a_y_gt = condexp_gt(a_left, a_right, a_if_true, a_if_false)
    a_y = numpy.array( [ a_y_lt, a_y_le, a_y_eq, a_y_ge, a_y_gt ] )
    f   = adfun(a_x, a_y)
    y   = f.forward(0, x)
    assert y[0] == 4.   # 1 <  1 is false so result is 4
    assert y[1] == 3.   # 1 <= 1 is true  so result is 3
    assert y[2] == 3.   # 1 == 1 is true  so result is 3
    assert y[3] == 3.   # 1 >= 1 is true  so result is 3
    assert y[4] == 4.   # 1 >  1 is false so result is 4
    x = numpy.array( [ 4., 3., 2., 1. ] )
    y = f.forward(0, x)
    assert y[0] == 1.   # 4 <  3 is false so result is 1
    assert y[1] == 1.   # 4 <= 3 is false so result is 1
    assert y[2] == 1.   # 4 == 3 is false so result is 1
    assert y[3] == 2.   # 4 >= 3 is true  so result is 2
    assert y[4] == 2.   # 4 >  3 is true  so result is 2

# Example using a2float ----------------------------------------------------
def pycppad_test_condexp_a2() :
    x   = numpy.array( [ 1., 1., 3., 4. ] )
    a_x = ad(x)
    # begin level two recording of the conditional expressions
    a2x = independent(a_x)
    a2left     = a2x[0]
    a2right    = a2x[1]
    a2if_true  = a2x[2]
    a2if_false = a2x[3]
    a2y_lt = condexp_lt(a2left, a2right, a2if_true, a2if_false)
    a2y_le = condexp_le(a2left, a2right, a2if_true, a2if_false)
    a2y_eq = condexp_eq(a2left, a2right, a2if_true, a2if_false)
    a2y_ge = condexp_ge(a2left, a2right, a2if_true, a2if_false)
    a2y_gt = condexp_gt(a2left, a2right, a2if_true, a2if_false)
    a2y = numpy.array( [ a2y_lt, a2y_le, a2y_eq, a2y_ge, a2y_gt ] )
    a_f = adfun(a2x, a2y)
    # begin level one recording of the conditional expressions
    a_x = independent(x)
    a_y = a_f.forward(0, a_x)
    f   = adfun(a_x, a_y)
    y   = f.forward(0, x)
    assert y[0] == 4.   # 1 <  1 is false so result is 4
    assert y[1] == 3.   # 1 <= 1 is true  so result is 3
    assert y[2] == 3.   # 1 == 1 is true  so result is 3
    assert y[3] == 3.   # 1 >= 1 is true  so result is 3
    assert y[4] == 4.   # 1 >  1 is false so result is 4
    x = numpy.array( [ 4., 3., 2., 1. ] )
    y = f.forward(0, x)
    assert y[0] == 1.   # 4 <  3 is false so result is 1
    assert y[1] == 1.   # 4 <= 3 is false so result is 1
    assert y[2] == 1.   # 4 == 3 is false so result is 1
    assert y[3] == 2.   # 4 >= 3 is true  so result is 2
    assert y[4] == 2.   # 4 >  3 is true  so result is 2
independent: 6.1 | Create an Independent Variable Vector |
adfun: 6.2 | Create an AD Function Object |
abort_recording: 6.3 | Abort a Recording of AD Operations |
forward: 6.4 | Forward Mode: Derivative in One Domain Direction |
reverse: 6.5 | Reverse Mode: Derivative in One Range Direction |
jacobian: 6.6 | Driver for Computing Entire Derivative |
hessian: 6.7 | Driver for Computing Hessian in a Range Direction |
optimize: 6.8 | Optimize an AD Function Object Tape |
a_x = independent(x)
This creates an independent variable vector a_x and starts recording
type(a_x[0]) operations.
You must create an adfun object (section 6.2),
or use abort_recording (section 6.3),
to stop the recording before making another call to independent.
The argument x must be a numpy.array with one dimension
(i.e., a vector).
All the elements of x must be of the same type:
instances of either int, float, or a_float.
The result a_x is a numpy.array with the same shape as x.
If the elements of x are instances of int or float,
the elements of a_x are instances of a_float.
If the elements of x are instances of a_float,
the elements of a_x are instances of a2float.
The value (section 5.2) of the elements of a_x
are equal to the corresponding elements of x.
from pycppad import *
import numpy

# Example using a_float ---------------------------------------------------
def pycppad_test_independent() :
    x   = numpy.array( [ 0., 0., 0. ] )
    a_x = independent(x)   # level 1 independent variables; start recording
    assert type(a_x) == numpy.ndarray
    for j in range(len(x)) :
        assert isinstance(x[j],   float)
        assert isinstance(a_x[j], a_float)
        assert a_x[j] == x[j]
    f = adfun(a_x, a_x)    # stop level 1 recording

# Example using a2float ---------------------------------------------------
def pycppad_test_independent_a2() :
    x   = numpy.array( [ 0., 0., 0. ] )
    a_x = independent(x)   # level 1 independent variables; start recording
    a2x = independent(a_x) # level 2 independent variables; start recording
    assert type(a_x) == numpy.ndarray
    assert type(a2x) == numpy.ndarray
    for j in range(len(x)) :
        assert isinstance(x[j],   float)
        assert isinstance(a_x[j], a_float)
        assert isinstance(a2x[j], a2float)
        assert a_x[j] == x[j]
        assert a2x[j] == x[j]
    a_f = adfun(a2x, a2x)  # stop level 2 recording
    f   = adfun(a_x, a_x)  # stop level 1 recording
f = adfun(a_x, a_y)
The object f will store the type( a_x[0] ) operation sequence
that mapped the independent variable vector a_x
to the dependent variable vector a_y.
The argument a_x is the numpy.array
returned by the previous call to independent (section 6.1).
Neither the size of a_x, nor the values of its elements,
may change between calling
a_x = independent(x)
and
f = adfun(a_x, a_y)
The length of the vector a_x determines the domain size n
for the function y = F(x) below.
The argument a_y specifies the dependent variables.
It must be a numpy.array with one dimension
(i.e., a vector) and with the same type of elements as a_x.
The object
f
stores the
type( a_x[0] )
operations
that mapped the vector
a_x
to the vector
a_y
.
The length of the vector
a_y
determines the range size
m
for the function
y = F(x)
below.
f
can be used to evaluate the function
\[
F : \B{R}^n \rightarrow \B{R}^m
\]
and its derivatives, where
y = F(x)
corresponds to the
operation sequence mentioned above.
m
is equal to the length of the vector
a_y
.
n
is equal to the length of the vector
a_x
.
The AD level of
f
is one less than
the AD level of the arguments
a_x
and
a_y
;
i.e., if
type( a_x[0] )
is a_float
(a2float
)
the corresponding AD level for
f
is zero (one).
from pycppad import *
def pycppad_test_adfun() :
# record operations at x = (0, 0, 0)
x = numpy.array( [ 0., 0., 0. ] )
a_x = independent(x) # declare independent variables and start recording
a_y0 = a_x[0];
a_y1 = a_x[0] * a_x[1];
a_y2 = a_x[0] * a_x[1] * a_x[2];
a_y = numpy.array( [ a_y0, a_y1, a_y2 ] )
f = adfun(a_x, a_y) # declare dependent variables and stop recording
# evaluate function at x = (1, 2, 3)
x = numpy.array( [ 1., 2., 3. ] )
y = f.forward(0, x)
assert y[0] == x[0]
assert y[1] == x[0] * x[1]
assert y[2] == x[0] * x[1] * x[2]
abort_recording()
a_x = independent(x)
If such a recording is currently in progress,
this will stop the recording and delete the corresponding information.
Otherwise, abort_recording
has no effect.
from pycppad import *
# Example using a_float ---------------------------------------------------
def pycppad_test_abort_recording() :
from numpy import array
try :
x = numpy.array( [ 1., 2., 3. ] )
a_x = independent(x) # start first level recording
a2_x = independent(a_x) # start second level recording
a_y = array([sum(a_x)]) # record some operations
if a_y[0] > 2 :
raise ValueError
except ValueError :
# Pretend that we are not sure if there are any active recordings
# and use this call to terminate any that may exist.
abort_recording()
a_x = independent(x) # test starting a level 1 recording
a2_x = independent(a_x) # test starting a level 2 recording
a_y = array([sum(a_x)]) # record some level 1 operations
f = adfun(a_x, a_y) # terminate level 1 recording
y = f.forward(0, x) # evaluate the function at original x value
assert( y[0] == 6. ) # check the value
abort_recording() # abort the level 2 recording
y_p = f.forward(p, x_p)
F : \B{R}^n \rightarrow \B{R}^m
to denote the
function corresponding to the adfun
object 6.2.e: f
.
Given the p-th order Taylor expansion for a function
X : \B{R} \rightarrow \B{R}^n
, this function can be used
to compute the p-th order Taylor expansion for the function
Y : \B{R} \rightarrow \B{R}^m
defined by
\[
Y(t) = F [ X(t) ]
\]
For
k = 0 , \ldots , p
,
we use
x^{(k)}
to denote the value of
x_k
in the
most recent call to
f.forward(k, x_k)
including
x^{(p)}
as the value
x_p
in this call.
We define the function
X(t)
by
\[
X(t) = x^{(0)} + x^{(1)} * t + \cdots + x^{(p)} * t^p
\]
For
k = 0 , \ldots , p
,
we use
y^{(k)}
to denote the Taylor coefficients
for
Y(t) = F[ X(t) ]
expanded about zero; i.e.,
\[
\begin{array}{rcl}
y^{(k)} & = & Y^{(k)} (0) / k !
\\
Y(t) & = & y^{(0)} + y^{(1)} * t + \cdots + y^{(p)} * t^p + o( t^p )
\end{array}
\]
where
o( t^p ) / t^p \rightarrow 0
as
t \rightarrow 0
.
The coefficient
y^{(p)}
is equal to
the value
y_p
returned by this call.
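To make the Taylor coefficient definitions concrete, here is a minimal sketch using plain numpy only (no pycppad required) for the running example F(x) = 2 * x0 * x1 with X(t) = x^{(0)} + x^{(1)} * t; expanding the product by hand gives the coefficients that forward mode returns in the examples below:

```python
import numpy

# Taylor coefficients of X(t) = x^{(0)} + x^{(1)} * t
x0 = numpy.array( [ 3., 4. ] )  # zero order coefficient x^{(0)}
x1 = numpy.array( [ 1., 0. ] )  # first order coefficient x^{(1)}

# Y(t) = F[ X(t) ] = 2 * (x0[0] + x1[0]*t) * (x0[1] + x1[1]*t)
# expanding the product in powers of t gives the coefficients below
y0 = 2. * x0[0] * x0[1]                        # y^{(0)} = F( x^{(0)} )
y1 = 2. * ( x0[0] * x1[1] + x1[0] * x0[1] )    # y^{(1)} = F'( x^{(0)} ) * x^{(1)}
```

Here y0 is what f.forward(0, x0) would return and y1 is what a subsequent f.forward(1, x1) would return.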
f
must be an 6.2: adfun
object.
We use 6.2.e.c: level
for the AD 5.1: ad
level of
this object.
p
is a non-negative int
.
It specifies the order of the Taylor coefficient for
Y(t)
that is computed.
x_p
is a numpy.array
with one dimension
(i.e., a vector) with length equal to the domain size 6.2.e.b: n
for the function
f
.
It specifies the p-th order Taylor coefficient for
X(t)
.
If the AD 6.2.e.c: level
for
f
is zero,
all the elements of
x_p
must be either int
or instances
of float
.
If the AD 6.2.e.c: level
for
f
is one,
all the elements of
x_p
must be a_float
objects.
y_p
is a numpy.array
with one dimension
(i.e., a vector) with length equal to the range size 6.2.e.a: m
for the function
f
.
It is set to the p-th order Taylor coefficient for
Y(t)
.
If the AD 6.2.e.c: level
for
f
is zero,
all the elements of
y_p
will be instances of float
.
If the AD 6.2.e.c: level
for
f
is one,
all the elements of
y_p
will be a_float
objects.
6.4.1: forward_0.py | Forward Order Zero: Example and Test |
6.4.2: forward_1.py | Forward Order One: Example and Test |
from pycppad import *
# Example using a_float ----------------------------------------------------
def pycppad_test_forward_0() :
# start recording a_float operations
x = numpy.array( [ 2., 3. ] ) # value of independent variables
a_x = independent(x) # declare a_float independent variables
# stop recording and store operations in the function object f
a_y = numpy.array( [ 2. * a_x[0] * a_x[1] ] ) # dependent variables
f = adfun(a_x, a_y) # f(x0, x1) = 2 * x0 * x1
# evaluate the function at a different argument value
p = 0 # order zero for function values
x = numpy.array( [ 3. , 4. ] ) # argument value
fp = f.forward(p, x) # function value
assert fp[0] == 2. * x[0] * x[1] # f(x0, x1) = 2 * x0 * x1
# Example using a2float ----------------------------------------------------
def pycppad_test_forward_0_a2() :
# start recording a2float operations
a_x = ad(numpy.array( [ 2., 3. ] )) # a_float value of independent variables
a2x = independent(a_x) # declare a2float independent variables
# stop recording and store operations in the function object f
a2y = numpy.array( [ 2. * a2x[0] * a2x[1] ] ) # dependent variables
a_f = adfun(a2x, a2y) # f(x0, x1) = 2 * x0 * x1
# evaluate the function at a different argument value
p = 0 # order zero for function values
a_x = ad( numpy.array( [ 3. , 4. ] ) ) # argument value
a_fp = a_f.forward(p, a_x) # function value
assert a_fp[0] == 2. * a_x[0] * a_x[1] # f(x0, x1) = 2 * x0 * x1
from pycppad import *
# Example using a_float -----------------------------------------------------
def pycppad_test_forward_1() :
# start recording a_float operations
x = numpy.array( [ 2., 3. ] ) # value of independent variables
a_x = independent(x) # declare independent variables
# stop recording and store operations in the function object f
a_y = numpy.array( [ 2. * a_x[0] * a_x[1] ] ) # dependent variables
f = adfun(a_x, a_y) # f(x0, x1) = 2 * x0 * x1
# evaluate the function at a different argument value
p = 0 # order zero for function values
x = numpy.array( [ 3. , 4. ] ) # argument value
fp = f.forward(p, x) # function value
assert fp[0] == 2. * x[0] * x[1] # f(x0, x1) = 2 * x0 * x1
# evaluate partial derivative of f(x0, x1) with respect to x0
p = 1 # order one for first derivatives
xp = numpy.array( [ 1. , 0. ] ) # direction for differentiation
fp = f.forward(p, xp) # value of directional derivative
assert fp[0] == 2. * x[1] # f_x0 (x0, x1) = 2 * x1
# evaluate partial derivative of f(x0, x1) with respect to x1
p = 1
xp = numpy.array( [ 0. , 1. ] ) # the x1 direction
fp = f.forward(p, xp)
assert fp[0] == 2. * x[0] # f_x1 (x0, x1) = 2 * x0
# Example using a2float -----------------------------------------------------
def pycppad_test_forward_1_a2() :
# start recording a2float operations
a_x = ad(numpy.array( [ 2., 3. ] )) # a_float value of independent variables
a2x = independent(a_x) # declare a2float independent variables
# stop recording and store operations in the function object f
a2y = numpy.array( [ 2. * a2x[0] * a2x[1] ] ) # dependent variables
a_f = adfun(a2x, a2y) # f(x0, x1) = 2 * x0 * x1
# evaluate the function at a different argument value
p = 0 # order zero for function values
a_x = ad(numpy.array( [ 3. , 4. ] )) # argument value
a_fp = a_f.forward(p, a_x) # function value
assert a_fp[0] == 2. * a_x[0] * a_x[1] # f(x0, x1) = 2 * x0 * x1
# evaluate partial derivative of f(x0, x1) with respect to x0
p = 1 # order one for first derivatives
a_xp = ad(numpy.array( [ 1. , 0. ] )) # direction for differentiation
a_fp = a_f.forward(p, a_xp) # value of directional derivative
assert a_fp[0] == 2. * a_x[1] # f_x0 (x0, x1) = 2 * x1
# evaluate partial derivative of f(x0, x1) with respect to x1
p = 1
a_xp = ad(numpy.array( [ 0. , 1. ] )) # the x1 direction
a_fp = a_f.forward(p, a_xp)
assert a_fp[0] == 2. * a_x[0] # f_x1 (x0, x1) = 2 * x0
dw = f.reverse(p, w)
For
k = 0 , \ldots , p-1
,
we use
x^{(k)}
to denote the value of
x_k
in the
most recent call to
f.forward(k, x_k)
We use
F : \B{R}^n \rightarrow \B{R}^m
to denote the
function corresponding to the adfun
object 6.2.e: f
.
X : \B{R} \times \B{R}^n \rightarrow \B{R}^n
by
\[
X(t, u) = u + x^{(0)} + x^{(1)} * t + \cdots + x^{(p-1)} * t^{p-1}
\]
Note that for
k = 0 , \ldots , p - 1
,
\[
x^{(k)} = \frac{1}{k !} \frac{\partial^k}{\partial t^k} X(0, 0)
\]
W : \B{R} \times \B{R}^n \rightarrow \B{R}
is defined by
\[
W(t, u) = w_0 * F_0 [ X(t, u) ] + \cdots + w_{m-1} * F_{m-1} [ X(t, u) ]
\]
We define the function
W_k : \B{R}^n \rightarrow \B{R}
by
\[
W_k ( u ) = \frac{1}{k !} \frac{\partial^k}{\partial t^k} W(0, u)
\]
It follows that
\[
W(t, u ) = W_0 ( u ) + W_1 ( u ) * t + \cdots + W_{p-1} (u) * t^{p-1}
+ o( t^{p-1} )
\]
where
o( t^{p-1} ) / t^{p-1} \rightarrow 0
as
t \rightarrow 0
.
f
must be an 6.2: adfun
object.
We use 6.2.e.c: level
for the AD 5.1: ad
level of
this object.
p
is a non-negative int
.
It specifies the order of the Taylor coefficient
W_{p-1} ( u )
that is differentiated.
Note that
W_{p-1} (u)
corresponds to a derivative of order
p-1
of
F(x)
,
so the derivative of
W_{p-1} (u)
corresponds to a derivative
of order
p
of
F(x)
.
w
is a numpy.array
with one dimension
(i.e., a vector) with length equal to the range size 6.2.e.a: m
for the function
f
.
It specifies the weighting vector
w
used in the definition of
W(t, u)
.
If the AD 6.2.e.c: level
for
f
is zero,
all the elements of
w
must be either int
or instances
of float
.
If the AD 6.2.e.c: level
for
f
is one,
all the elements of
w
must be a_float
objects.
v
is a numpy.array
with one dimension
(i.e., a vector) with length equal to the domain size 6.2.e.b: n
for the function
f
.
It is set to the derivative
\[
\begin{array}{rcl}
dw & = & W_{p-1}^{(1)} ( 0 ) \\
& = &
\partial_u \frac{1}{(p-1) !} \frac{\partial^{p-1}}{\partial t^{p-1}} W(0, 0)
\end{array}
\]
If the AD 6.2.e.c: level
for
f
is zero,
all the elements of
dw
will be instances of float
.
If the AD 6.2.e.c: level
for
f
is one,
all the elements of
dw
will be a_float
objects.
In the special case
p = 1
, we have
\[
\begin{array}{rcl}
dw
& = & \partial_u \frac{1}{0 !} \frac{\partial^0}{\partial t^0} W(0, 0)
\\
& = & \partial_u W(0, 0)
\\
& = &
\partial_u \left[
w_0 * F_0 ( u + x^{(0)} ) + \cdots + w_{m-1} F_{m-1} ( u + x^{(0)} )
\right]_{u = 0}
\\
& = &
w_0 * F_0^{(1)} ( x^{(0)} ) + \cdots + w_{m-1} * F_{m-1}^{(1)} ( x^{(0)} )
\end{array}
\]
In the special case
p = 2
, we have
\[
\begin{array}{rcl}
dw
& = & \partial_u \frac{1}{1 !} \frac{\partial^1}{\partial t^1} W (0, 0)
\\
& = &
\partial_u \left[
w_0 * F_0^{(1)} ( u + x^{(0)} ) * x^{(1)}
+ \cdots +
w_{m-1} * F_{m-1}^{(1)} ( u + x^{(0)} ) * x^{(1)}
\right]_{u = 0}
\\
& = &
w_0 * ( x^{(1)} )^\R{T} * F_0^{(2)} ( x^{(0)} )
+ \cdots +
w_{m-1} * ( x^{(1)} )^\R{T} * F_{m-1}^{(2)} ( x^{(0)} )
\end{array}
\]
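These two special cases can be checked by hand without pycppad. The sketch below (plain numpy) writes out the gradient and Hessian of the running example F(x) = 2 * x0 * x1 in closed form and evaluates the right hand sides for w = (1), x^{(0)} = (3, 4), and x^{(1)} = (1, 0); the results match the reverse examples that follow:

```python
import numpy

x0 = numpy.array( [ 3., 4. ] )   # x^{(0)}
x1 = numpy.array( [ 1., 0. ] )   # x^{(1)}, the forward mode direction
w  = numpy.array( [ 1. ] )       # weight on the single range component

# for F(x) = 2 * x0 * x1 the gradient and Hessian are known in closed form
grad = numpy.array( [ 2. * x0[1] , 2. * x0[0] ] )    # F^{(1)}( x^{(0)} )
hess = numpy.array( [ [ 0., 2. ] , [ 2., 0. ] ] )    # F^{(2)}( x^{(0)} )

dw_p1 = w[0] * grad           # p = 1 case: w_0 * F^{(1)}( x^{(0)} )
dw_p2 = w[0] * hess.dot(x1)   # p = 2 case; F^{(2)} is symmetric, so this
                              # equals w_0 * (x^{(1)})^T * F^{(2)}( x^{(0)} )
```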
6.5.1: reverse_1.py | Reverse Order One: Example and Test |
6.5.2: reverse_2.py | Reverse Order Two: Example and Test |
from pycppad import *
# Example using a_float ------------------------------------------------------
def pycppad_test_reverse_1():
# start recording a_float operations
x = numpy.array( [ 2. , 3. ] ) # value of independent variables
a_x = independent(x) # declare a_float independent variables
# stop recording and store operations in the function object f
a_y = numpy.array( [ 2. * a_x[0] * a_x[1] ] ) # dependent variables
f = adfun(a_x, a_y) # f(x0, x1) = 2 * x0 * x1
# evaluate the function at a different argument value
p = 0 # order zero for function values
x = numpy.array( [ 3. , 4. ] ) # argument value
fp = f.forward(p, x) # function value
assert fp[0] == 2. * x[0] * x[1] # f(x0, x1) = 2 * x0 * x1
# evaluate derivative of f(x0, x1)
p = 1 # order one for first derivatives
w = numpy.array( [ 1. ] ) # weight in range space
fp = f.reverse(p, w) # derivative of weighted function
assert fp[0] == 2. * x[1] # f_x0 (x0, x1) = 2 * x1
assert fp[1] == 2. * x[0] # f_x1 (x0, x1) = 2 * x0
# Example using a2float ------------------------------------------------------
def pycppad_test_reverse_1_a2():
# start recording a2float operations
a_x = ad(numpy.array( [ 2. , 3. ] )) # value of independent variables
a2x = independent(a_x) # declare a2float independent variables
# stop recording and store operations in the function object f
a2y = numpy.array( [ 2. * a2x[0] * a2x[1] ] ) # dependent variables
a_f = adfun(a2x, a2y) # f(x0, x1) = 2 * x0 * x1
# evaluate the function at a different argument value
p = 0 # order zero for function values
a_x = ad(numpy.array( [ 3. , 4. ] )) # argument value
a_fp = a_f.forward(p, a_x) # function value
assert a_fp[0] == 2. * a_x[0] * a_x[1] # f(x0, x1) = 2 * x0 * x1
# evaluate derivative of f(x0, x1)
p = 1 # order one for first derivatives
a_w = ad(numpy.array( [ 1. ] )) # weight in range space
a_fp = a_f.reverse(p, a_w) # derivative of weighted function
assert a_fp[0] == 2. * a_x[1] # f_x0 (x0, x1) = 2 * x1
assert a_fp[1] == 2. * a_x[0] # f_x1 (x0, x1) = 2 * x0
from pycppad import *
# Example using a_float ------------------------------------------------------
def pycppad_test_reverse_2():
# start recording a_float operations
x = numpy.array( [ 2. , 3. ] ) # value of independent variables
a_x = independent(x) # declare a_float independent variables
# stop recording and store operations in the function object f
a_y = numpy.array( [ 2. * a_x[0] * a_x[1] ] ) # dependent variables
f = adfun(a_x, a_y) # f(x0, x1) = 2 * x0 * x1
# evaluate the function at same argument value
p = 0 # derivative order
x_p = x # zero order Taylor coefficient
f_p = f.forward(0, x_p) # function value
assert f_p[0] == 2. * x[0] * x[1] # f(x0, x1) = 2 * x0 * x1
# evaluate partial derivative with respect to x[0]
p = 1 # derivative order
x_p = numpy.array( [ 1. , 0 ] ) # first order Taylor coefficient
f_p = f.forward(1, x_p) # partial w.r.t. x0
assert f_p[0] == 2. * x[1] # f_x0 (x0, x1) = 2 * x1
# evaluate derivative of partial w.r.t. x[0]
p = 2 # derivative order
w = numpy.array( [1.] ) # weighting vector
dw = f.reverse(p, w) # derivative of weighted function
assert dw[0] == 0. # f_x0_x1 (x0, x1) = 0
assert dw[1] == 2. # f_x0_x1 (x0, x1) = 2
# Example using a2float ------------------------------------------------------
def pycppad_test_reverse_2_a2():
# start recording a2float operations
x = numpy.array( [ 2. , 3. ] ) # value of independent variables
a_x = ad(x) # value of independent variables
a2x = independent(a_x) # declare a2float independent variables
# stop recording and store operations in the function object f
a2y = numpy.array( [ 2. * a2x[0] * a2x[1] ] ) # dependent variables
a_f = adfun(a2x, a2y) # f(x0, x1) = 2 * x0 * x1
# evaluate the function at same argument value
p = 0 # derivative order
x_p = a_x # zero order Taylor coefficient
f_p = a_f.forward(0, x_p) # function value
assert f_p[0] == 2. * x[0] * x[1] # f(x0, x1) = 2 * x0 * x1
# evaluate partial derivative with respect to x[0]
p = 1 # derivative order
x_p = ad(numpy.array([1. , 0 ])) # first order Taylor coefficient
f_p = a_f.forward(1, x_p) # partial w.r.t. x0
assert f_p[0] == 2. * x[1] # f_x0 (x0, x1) = 2 * x1
# evaluate derivative of partial w.r.t. x[0]
p = 2 # derivative order
w = ad(numpy.array( [1.] )) # weighting vector
dw = a_f.reverse(p, w) # derivative of weighted function
assert dw[0] == 0. # f_x0_x1 (x0, x1) = 0
assert dw[1] == 2. # f_x0_x1 (x0, x1) = 2
J = f.jacobian(x)
F^{(1)} (x)
where
F : \B{R}^n \rightarrow \B{R}^m
is the
function corresponding to the adfun
object 6.2.e: f
.
f
must be an 6.2: adfun
object.
We use 6.2.e.c: level
for the AD 5.1: ad
level of
this object.
x
is a numpy.array
with one dimension
(i.e., a vector) with length equal to the domain size 6.2.e.b: n
for the function
f
.
It specifies the argument value at which the derivative is computed.
If the AD 6.2.e.c: level
for
f
is zero,
all the elements of
x
must be either int
or instances
of float
.
If the AD 6.2.e.c: level
for
f
is one,
all the elements of
x
must be a_float
objects.
J
is a numpy.array
with two dimensions
(i.e., a matrix).
The first dimension (row size) is equal to 6.2.e.a: m
(the number of range components in the function
f
).
The second dimension (column size) is equal to 6.2.e.b: n
(the number of domain components in the function
f
).
It is set to the derivative; i.e.,
\[
J = F^{(1)} (x)
\]
If the AD 6.2.e.c: level
for
f
is zero,
all the elements of
J
will be instances of float
.
If the AD 6.2.e.c: level
for
f
is one,
all the elements of
J
will be a_float
objects.
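As an independent cross-check on the Jacobian driver, a central difference approximation (plain numpy, no pycppad required) should agree with f.jacobian(x) to within roughly the square of the step size. The sketch below uses the same function F as the example that follows:

```python
import numpy

def F(x) :
    # same function as the jacobian example below
    return numpy.array( [
        x[0] * numpy.exp(x[1]) ,
        x[0] * numpy.sin(x[1]) ,
        x[0] * numpy.cos(x[1])
    ] )

def jacobian_fd(F, x, h = 1e-6) :
    # central difference approximation for F'(x), one column per component of x
    m = F(x).size
    n = x.size
    J = numpy.empty( (m, n) )
    for j in range(n) :
        e    = numpy.zeros(n)
        e[j] = h
        J[:, j] = ( F(x + e) - F(x - e) ) / (2. * h)
    return J

x = numpy.array( [ 2., 3. ] )
J = jacobian_fd(F, x)
```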
from pycppad import *
# Example using a_float -----------------------------------------------------
def pycppad_test_jacobian():
delta = 10. * numpy.finfo(float).eps
x = numpy.array( [ 0., 0. ] )
a_x = independent(x)
a_y = numpy.array( [
a_x[0] * exp(a_x[1]) ,
a_x[0] * sin(a_x[1]) ,
a_x[0] * cos(a_x[1])
] )
f = adfun(a_x, a_y)
x = numpy.array( [ 2., 3. ] )
J = f.jacobian(x)
assert abs( J[0,0] - exp(x[1]) ) < delta
assert abs( J[0,1] - x[0] * exp(x[1]) ) < delta
assert abs( J[1,0] - sin(x[1]) ) < delta
assert abs( J[1,1] - x[0] * cos(x[1]) ) < delta
assert abs( J[2,0] - cos(x[1]) ) < delta
assert abs( J[2,1] + x[0] * sin(x[1]) ) < delta
# Example using a2float -----------------------------------------------------
def pycppad_test_jacobian_a2():
delta = 10. * numpy.finfo(float).eps
a_x = ad( numpy.array( [ 0., 0. ] ) )
a2x = independent(a_x)
a2y = numpy.array( [
a2x[0] * exp(a2x[1]) ,
a2x[0] * sin(a2x[1]) ,
a2x[0] * cos(a2x[1])
] )
a_f = adfun(a2x, a2y)
x = numpy.array( [2., 3.] )
a_x = ad(x)
a_J = a_f.jacobian(a_x)
assert abs( a_J[0,0] - exp(x[1]) ) < delta
assert abs( a_J[0,1] - x[0] * exp(x[1]) ) < delta
assert abs( a_J[1,0] - sin(x[1]) ) < delta
assert abs( a_J[1,1] - x[0] * cos(x[1]) ) < delta
assert abs( a_J[2,0] - cos(x[1]) ) < delta
assert abs( a_J[2,1] + x[0] * sin(x[1]) ) < delta
H = f.hessian(x, w)
\[
w_0 * F_0 (x) + \cdots + w_{m-1} * F_{m-1} (x)
\]
where
F : \B{R}^n \rightarrow \B{R}^m
is the
function corresponding to the adfun
object 6.2.e: f
.
f
must be an 6.2: adfun
object.
We use 6.2.e.c: level
for the AD 5.1: ad
level of
this object.
x
is a numpy.array
with one dimension
(i.e., a vector) with length equal to the domain size 6.2.e.b: n
for the function
f
.
It specifies the argument value at which the derivative is computed.
If the AD 6.2.e.c: level
for
f
is zero,
all the elements of
x
must be either int
or instances
of float
.
If the AD 6.2.e.c: level
for
f
is one,
all the elements of
x
must be a_float
objects.
w
is a numpy.array
with one dimension
(i.e., a vector) with length equal to the range size 6.2.e.a: m
for the function
f
.
It specifies the weighting vector in the expression for the Hessian above.
If the AD 6.2.e.c: level
for
f
is zero,
all the elements of
w
must be either int
or instances
of float
.
If the AD 6.2.e.c: level
for
f
is one,
all the elements of
w
must be a_float
objects.
H
is a numpy.array
with two dimensions
(i.e., a matrix).
Both its first and second dimension size
(row and column size) are equal to 6.2.e.b: n
(the number of domain components in the function
f
).
It is set to the Hessian; i.e.,
\[
H = w_0 * F_0^{(2)} (x) + \cdots + w_{m-1} * F_{m-1}^{(2)} (x)
\]
If the AD 6.2.e.c: level
for
f
is zero,
all the elements of
H
will be instances of float
.
If the AD 6.2.e.c: level
for
f
is one,
all the elements of
H
will be a_float
objects.
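The Hessian driver can also be cross-checked with second order central differences in plain numpy (no pycppad required). With the weight vector w = (0., 1., 0.) used in the example below, the weighted function reduces to g(x) = x0 * sin(x1):

```python
import numpy

def g(x) :
    # w = (0., 1., 0.) picks out F_1, so g(x) = x0 * sin(x1)
    return x[0] * numpy.sin(x[1])

def hessian_fd(g, x, h = 1e-4) :
    # second order central difference approximation for g''(x)
    n = x.size
    H = numpy.empty( (n, n) )
    for i in range(n) :
        for j in range(n) :
            ei = numpy.zeros(n) ; ei[i] = h
            ej = numpy.zeros(n) ; ej[j] = h
            H[i, j] = ( g(x + ei + ej) - g(x + ei - ej)
                      - g(x - ei + ej) + g(x - ei - ej) ) / (4. * h * h)
    return H

x = numpy.array( [ 2., 3. ] )
H = hessian_fd(g, x)
```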
from pycppad import *
# Example using a_float -----------------------------------------------------
def pycppad_test_hessian():
delta = 10. * numpy.finfo(float).eps
x = numpy.array( [ 0., 0. ] )
a_x = independent(x)
a_y = numpy.array( [
a_x[0] * exp(a_x[1]) ,
a_x[0] * sin(a_x[1]) ,
a_x[0] * cos(a_x[1])
] )
f = adfun(a_x, a_y)
x = numpy.array( [ 2., 3. ] )
w = numpy.array( [ 0., 1., 0. ] ) # compute Hessian of x0 * sin(x1)
H = f.hessian(x, w)
assert abs( H[0,0] - 0. ) < delta
assert abs( H[0,1] - cos(x[1]) ) < delta
assert abs( H[1,0] - cos(x[1]) ) < delta
assert abs( H[1,1] + x[0] * sin(x[1]) ) < delta
# Example using a2float -----------------------------------------------------
def pycppad_test_hessian_a2():
delta = 10. * numpy.finfo(float).eps
a_x = ad( numpy.array( [ 0., 0. ] ) )
a2x = independent(a_x)
a2y = numpy.array( [
a2x[0] * exp(a2x[1]) ,
a2x[0] * sin(a2x[1]) ,
a2x[0] * cos(a2x[1])
] )
a_f = adfun(a2x, a2y)
x = numpy.array( [ 2., 3. ] )
a_x = ad(x)
a_w = ad( numpy.array( [ 0., 1., 0. ] ) ) # compute Hessian of x0 * sin(x1)
a_H = a_f.hessian(a_x, a_w)
assert abs( a_H[0,0] - 0. ) < delta
assert abs( a_H[0,1] - cos(x[1]) ) < delta
assert abs( a_H[1,0] - cos(x[1]) ) < delta
assert abs( a_H[1,1] + x[0] * sin(x[1]) ) < delta
f.optimize()
The f.optimize
procedure reduces the number of operations,
and thereby the time and memory, required to
compute function and derivative values.
f
is an 6.2: adfun
object.
The optimize
member function
may greatly reduce the size of the operation sequence corresponding to
f
.
from pycppad import *
import time
# Example using a_float -----------------------------------------------------
def pycppad_test_optimize():
# create function with many variables that get removed by optimize
n_sum = 10000
x = numpy.array( [ 0. ] )
a_x = independent(x)
a_sum = 0.
for i in range(n_sum) :
a_sum = a_sum + a_x[0];
a_y = numpy.array( [ a_sum ] )
f = adfun(a_x, a_y)
# time for forward operations before optimize
x = numpy.array( [ 1. ] )
t0 = time.time()
sum_before = f.forward(0, x)
sec_before = time.time() - t0
# time for forward operations after optimize
f.optimize()
t0 = time.time()
sum_after = f.forward(0, x)
sec_after = time.time() - t0
assert sum_before == float(n_sum)
assert sum_after == float(n_sum)
# expect sec_before to be at least 1.5 times sec_after
assert( sec_after * 1.5 <= sec_before )
a_float
and a2float
can be used together to compute derivatives of functions that are
defined in terms of derivatives of other functions.
F : \B{R}^2 \rightarrow \B{R}
is defined by
\[
F(u) = u_0^2 + u_1^2
\]
It follows that
\[
\begin{array}{rcl}
\partial_{u(0)} F(u) & = & 2 * u_0 \\
\partial_{u(1)} F(u) & = & 2 * u_1
\end{array}
\]
G : \B{R}^2 \rightarrow \B{R}
is
defined by
\[
G(x) = x_1 * \partial_{u(0)} F(x_0 , 1) + x_0 * \partial_{u(1)} F(x_0, 1)
\]
where
\partial{u(j)} F(a, b)
denotes the partial of
F
with respect to
u_j
and evaluated at
u = (a, b)
.
It follows that
\[
\begin{array}{rcl}
G (x) & = & 2 * x_1 * x_0 + 2 * x_0 \\
\partial_{x(0)} G (x) & = & 2 * x_1 + 2 \\
\partial_{x(1)} G (x) & = & 2 * x_0
\end{array}
\]
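Before taping this with two levels of AD, the closed form above can be sanity checked with plain floats (no pycppad required); central differences of G reproduce the stated partials essentially exactly, because G is bilinear in x:

```python
import numpy

def G(x) :
    # closed form derived above: G(x) = 2 * x1 * x0 + 2 * x0
    return 2. * x[1] * x[0] + 2. * x[0]

x = numpy.array( [ 2., 3. ] )
h = 1e-6
# central difference approximations for the two partials of G
G_x0 = ( G(x + numpy.array([h, 0.])) - G(x - numpy.array([h, 0.])) ) / (2. * h)
G_x1 = ( G(x + numpy.array([0., h])) - G(x - numpy.array([0., h])) ) / (2. * h)
```

At x = (2, 3) this gives G(x) = 16 with partials 2 * x1 + 2 = 8 and 2 * x0 = 4, matching the assertions in the test below.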
from pycppad import *
def pycppad_test_two_levels():
# start recording a_float operations
x = numpy.array( [ 2. , 3. ] )
a_x = independent(x)
# start recording a2float operations
a_u = numpy.array( [a_x[0] , ad(1) ] )
a2u = independent(a_u)
# stop a2float recording and store operations in a_f
a2v = numpy.array( [ a2u[0] * a2u[0] + a2u[1] * a2u[1] ] )
a_f = adfun(a2u, a2v) # F(u0, u1) = u0 * u0 + u1 * u1
# evaluate the gradient of F
a_J = a_f.jacobian(a_u)
# stop a_float recording and store operations in g
a_y = numpy.array( [ a_x[1] * a_J[0,0] + a_x[0] * a_J[0,1] ] )
g = adfun(a_x, a_y) # G(x0, x1) = x1 * F_u0(x0, 1) + x0 * F_u1(x0, 1)
# evaluate the gradient of G
J = g.jacobian(x)
assert J[0,0] == 2. * x[1] + 2
assert J[0,1] == 2. * x[0]
yf = runge_kutta_4(f, ti, yi, dt)
f : \B{R}^n \rightarrow \B{R}^n
,
and a point
yi \in \B{R}^n
such that an unknown function
y : \B{R} \rightarrow \B{R}^n
satisfies the equations
\[
\begin{array}{rcl}
y( ti ) & = & yi \\
y'(t) & = & f[t, y(t) ] \\
\end{array}
\]
We use the fourth order Runge-Kutta formula (see equation 16.1.2 of
Numerical Recipes in Fortran, 2nd ed.) to approximate the value of
\[
yf = y( ti + \Delta t )
\]
f
is a function such that, if
t
is a scalar and
y
is a vector with size
n
,
k = f(t, y)
returns a vector of size
n
that is the value of
f(t, y)
at the specified values.
ti
is a scalar that specifies the value of
ti
in the problem above.
yi
is a vector with size
n
that specifies the value of
yi
in the problem above.
dt
is a scalar that specifies the value of
\Delta t
in the problem above.
yf
is a vector with size
n
that is the approximation for
y( t + \Delta t )
.
runge_kutta_4
. In this table,
s
and
t
are
scalars,
d
is a decimal number (i.e., a float
)
and
u
and
v
are vectors with size
n
.
operation | result |
d * s
| scalar |
s + t
| scalar |
s * u
|
vector with size
n
|
d * u
|
vector with size
n
|
u + v
|
vector with size
n
|
def runge_kutta_4(f, ti, yi, dt) :
k1 = dt * f(ti , yi)
k2 = dt * f(ti + .5*dt , yi + .5*k1)
k3 = dt * f(ti + .5*dt , yi + .5*k2)
k4 = dt * f(ti + dt , yi + k3)
yf = yi + (1./6.) * ( k1 + 2.*k2 + 2.*k3 + k4 )
return yf
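As a quick sanity check of the formula above, the sketch below (a self-contained copy of runge_kutta_4, pure numpy) integrates the scalar ODE y'(t) = -y(t), whose exact solution is y(0) * exp(-t), from t = 0 to t = 1:

```python
import numpy

def runge_kutta_4(f, ti, yi, dt) :
    # one fourth order Runge-Kutta step, as defined above
    k1 = dt * f(ti , yi)
    k2 = dt * f(ti + .5*dt , yi + .5*k1)
    k3 = dt * f(ti + .5*dt , yi + .5*k2)
    k4 = dt * f(ti + dt , yi + k3)
    return yi + (1./6.) * ( k1 + 2.*k2 + 2.*k3 + k4 )

def fun(t, y) :
    # right hand side of y'(t) = - y(t); exact solution y(t) = y(0) * exp(-t)
    return - y

t  = 0.
y  = numpy.array( [ 1. ] )
dt = .01
for k in range(100) :   # integrate from t = 0 to t = 1
    y = runge_kutta_4(fun, t, y, dt)
    t = t + dt
```

With this step size the global error of the fourth order method is well below 1e-8.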
runge_kutta_4
to solve an ODE.
y : \B{R} \rightarrow \B{R}^n
by
\[
y_j (t) = t^{j+1}
\]
It follows that the derivative of
y(t)
satisfies the
8: runge_kutta_4
ODE equation where
y(0) = 0
and
f(t, y)
is given by
\[
f(t , y)_j = y_j '(t) = \left\{ \begin{array}{ll}
1 & {\; \rm if \;} j = 0 \\
(j+1) y_{j-1} (t) & {\; \rm otherwise }
\end{array} \right.
\]
from pycppad import *
def pycppad_test_runge_kutta_4_correct() :
def fun(t , y) :
n = y.size
f = numpy.zeros(n)
f[0] = 1.
index = numpy.array( range(n-1) ) + 1
f[index] = (index + 1) * y[index-1]
return f
n = 5 # size of y(t) (order of method plus 1)
ti = 0. # initial time
dt = 2. # a very large time step size to test correctness
yi = numpy.zeros(n) # initial value for y(t); i.e., y(0)
# take one 4-th order Runge-Kutta integration step of size dt
yf = runge_kutta_4(fun, ti, yi, dt)
# check the results
t_jp = 1. # t^0 at t = dt
for j in range(n-1) :
t_jp = t_jp * dt # t^(j+1) at t = dt
assert abs( yf[j] - t_jp ) < 1e-10 # check yf[j] = t^(j+1)
y : \B{R} \times \B{R} \rightarrow \B{R}^n
by
\[
y_j (x, t) = x t^{j+1}
\]
It follows that the derivative of
y(t)
satisfies the
8: runge_kutta_4
ODE equation where
y(0) = 0
and
f(t, y)
is given by
\[
f(t , y)_j = \partial_t y_j (x, t) = \left\{ \begin{array}{ll}
x & {\; \rm if \;} j = 0 \\
(j+1) y_{j-1} (x, t) & {\; \rm otherwise }
\end{array} \right.
\]
It also follows that
\[
\partial_x y_j (x, t) = t^{j+1}
\]
from pycppad import *
def pycppad_test_runge_kutta_4_ad() :
def fun(t , y) :
n = y.size
f = ad( numpy.zeros(n) )
f[0] = a_x[0]
index = numpy.array( range(n-1) ) + 1
f[index] = (index + 1) * y[index-1]
return f
n = 5 # size of y(t) (order of method plus 1)
ti = 0. # initial time
dt = 2. # a very large time step size (method is exact)
# initial value for y(t); i.e., y(0)
a_yi = ad( numpy.zeros(n) )
# declare a_x to be the independent variable vector
x = numpy.array( [.5] )
a_x = independent( numpy.array( x ) )
# take one 4-th order Runge-Kutta integration step of size dt
a_yf = runge_kutta_4(fun, ti, a_yi, dt)
# define the AD function g : x -> yf
g = adfun(a_x, a_yf)
# compute the derivative of g w.r.t x at x equals .5
dg = g.jacobian(x)
# check the result is as expected
t_jp = 1 # t^0
for j in range(n-1) :
t_jp = t_jp * dt # t^(j+1) at t = dt
assert abs( a_yf[j] - x[0]*t_jp ) < 1e-10 # check yf[j] = x*t^(j+1)
assert abs( dg[j,0] - t_jp ) < 1e-10 # check dg[j] = t^(j+1)
pycppad
function object and then evaluated at much
higher speeds than the Python evaluation.
y : \B{R}^2 \times \B{R} \rightarrow \B{R}^n
by
\[
\begin{array}{rcl}
y(x, 0) & = & x_0
\\
\partial_t y(x, t) & = & x_1 y(x, t)
\end{array}
\]
It follows that
\[
y(x, t) = x_0 \exp ( x_1 t )
\]
Suppose we want to compute values for the function
g : \B{R}^2 \rightarrow \B{R}
defined by
\[
g(x) = y(x, 1)
\]
In this example we compare the execution time for doing this in pure Python
with using a pycppad function object to compute
g(x)
in C++.
from pycppad import *
import time
def pycppad_test_runge_kutta_4_cpp() :
x_1 = 0; # use this variable to switch x_1 between float and ad(float)
def fun(t , y) :
f = x_1 * y
return f
# Number of Runge-Kutta times steps to include in the function object
M = 100
# Start time for recording the pycppad function object
s0 = time.time()
# Declare three independent variables. The operation sequence does not
# depend on x, so we could use any value here.
x = numpy.array( [.1, .1, .1] )
a_x = independent( numpy.array( x ) )
# The first independent variable, x[0], is the value of y(0)
a_y = numpy.array( [ a_x[0] ] )
# Make x_1 a variable so we can use rk4 with various coefficients.
x_1 = a_x[1]
# Make dt a variable so we can use rk4 with various step sizes.
dt = a_x[2]
# f(t, y) does not depend on t, so no need to make t a variable.
t = ad(0.)
# Record the operations for M time steps
for k in range(M) :
a_y = runge_kutta_4(fun, t, a_y, dt)
t = t + dt
# define the AD function rk4 : x -> y
rk4 = adfun(a_x, a_y)
# amount of time it took to tape this function object
tape_sec = time.time() - s0
# make the function object more efficient
s0 = time.time()
rk4.optimize()
opt_sec = time.time() - s0
ti = 0. # initial time
tf = 1. # final time
N = M * 100 # number of time steps
dt = (tf - ti) / N # size of time step
x_0 = 2. # use this for initial value of y(t)
x_1 = .5 # use this for coefficient in ODE
# python version of integrator with float values
s0 = time.time()
t = ti
y = numpy.array( [ x_0 ] );
for k in range(N) :
y = runge_kutta_4(fun, t, y, dt)
t = t + dt
# number of seconds to solve the ODE using python float
python_sec = time.time() - s0
# check solution is correct
assert( abs( y[0] - x_0 * exp( x_1 * tf ) ) < 1e-10 )
# pycppad function object version of integrator
s0 = time.time()
t = ti
x = numpy.array( [ x_0 , x_1 , dt ] )
for k in range(N/M) :
y = rk4.forward(0, x);
x[0] = y[0];
# number of seconds to solve the ODE using pycppad function object
cpp_sec = time.time() - s0
# check solution is correct
assert( abs( y[0] - x_0 * exp( x_1 * tf ) ) < 1e-10 )
# Uncomment the print statement below to see actual times on your machine
format = 'cpp_sec = %8f,\n'
format = format + 'python_sec/cpp_sec = %5.1f\n'
format = format + 'tape_sec/cpp_sec = %5.1f\n'
format = format + 'opt_sec/cpp_sec = %5.1f'
s = cpp_sec
# print format % (s, python_sec/s, tape_sec/s, opt_sec/s )
# check that C++ is always more than 75 times faster
assert( 75. * cpp_sec <= python_sec )
pycppad.cppad_
was missing from the installation
(after testing).
This was due to a bug in the python
distutils
(http://docs.python.org/library/distutils.html)
package when the --inplace
option is specified.
The 2.g: install
instructions have been
changed to avoid this problem.
--inplace
flag in
2.e: build
instructions.
with_debugging
option to
2.f: testing
so that test_more.py
does not fail when you build without debugging.
setup.py
script used during the
2: install
process.
This included having the user download and install CppAD,
instead of having setup.py
do this task.
import cppad_
in 20111017
version.
pycppad
is always more than 100 times faster than
straight python for this test case.
g++
version 4.6.1.
--undef
option is not available to ./setup.py build
so
./setup.py build
was changed to ./setup.py build_ext
in the 2.e: build instructions
.
--inplace
in the example setup.py
command.
This has been fixed in the new
2.e.a: debug build instructions
.
cppad-20100101.5
to using cppad-20110101.2.gpl.tgz
;
see
CppAD whats_new
(http://www.coin-or.org/CppAD/Doc/whats_new.htm) .
external
sub directory of
current working directory, instead of $HOME/install
,
for default directory that holds a local copy of CppAD; see
cppad_parent_dir
under
2.d: Required Setup Information
.
boost_python_include_dir
to list of
2.d: Required Setup Information
.
x
and
y
to all have
the same type.
If this was not the case, pycppad
might crash without
a useful error message.
This has been fixed.
cppad-20100101.2
to
cppad-20100101.5
.
This fixes a problem when installing pycppad with version 1.44.0 of
boost-python
(http://www.boost.org/doc/libs/1_44_0/libs/python/doc/index.html) .
--inplace
to the
2.e.b: optimized
build instructions.
Improved the 2.g: install
instructions
and the discussion of the
2.h: python path
.
exit
to sys.exit
in
2.f: test_example.py
and test_more.py
(required on some systems).
all
to numpy.all
in that example
(required on some systems).
setup.py
so that it patches the CppAD distribution
(this fixes a problem with the
2.e.b: optimized
build).
cppad-20090909.0
to
cppad-20100101.0
.
setup.py
file
(used during the 2.e: build
step of the install)
had /home/bradbell/install
as the value of cppad_parent_dir
.
This has been changed so that the default is
$HOME/install
.
libboost_python-py26.so
had a problem with
ad(x)
where
x
was of type
int
. This has been fixed.
tanh
is now included in 5.7: std_math
(it was documented but missing from the actual
pycppad implementation before this date).
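Since tanh is one of the standard math unary functions, the identity its example verifies is the derivative formula d/dx tanh(x) = 1 - tanh(x)^2. A plain-Python finite-difference check of that identity (independent of pycppad itself):

```python
import math

def tanh_derivative(x):
    # analytic derivative of tanh: 1 - tanh(x)^2
    return 1.0 - math.tanh(x) ** 2

x = 0.5
h = 1e-6
# central finite-difference approximation of d/dx tanh at x
fd = (math.tanh(x + h) - math.tanh(x - h)) / (2.0 * h)
assert abs(fd - tanh_derivative(x)) < 1e-8
```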
independent
appears in example code,
it is linked in the following way: 6.1: independent
.
tanh
from standard math functions because
previous release of CppAD did not include it
(a new release of CppAD is being built so that
tanh
can be included in pycppad
).
example/ad_unary.py
had the same name
and hence only one was being run.
This has been fixed.
**
exponentiation operator in 3: get_started.py
.
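The ** operator in get_started.py appears in the Gaussian density F(x) = exp(-(x[0]**2 + x[1]**2)/2), whose partial derivative with respect to x[0] is -F(x) * x[0]. A plain-Python finite-difference check of that formula (independent of pycppad itself):

```python
import math

def F(x0, x1):
    # Gaussian density from get_started.py, written with the ** operator
    return math.exp(-(x0 ** 2. + x1 ** 2.) / 2.)

x0, x1 = 1., 2.
h = 1e-6
# central finite difference approximates dF/dx0
fd = (F(x0 + h, x1) - F(x0 - h, x1)) / (2. * h)
# analytic partial: dF/dx0 = -F(x0, x1) * x0, so fd + F*x0 ~= 0
assert abs(fd + F(x0, x1) * x0) < 1e-8
```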
The pycppad
tests no longer require any package to run; i.e.,
it is no longer necessary to install
py.test
.
pyad-yyyymmdd.tar.gz
to
pycppad-yyyymmdd.tar.gz
.
This is a change in API requiring user code to change
import pyad.cppad
to import pycppad
.
pyad-20090126
.
Add building in place, and omitting the setup.py
install step,
to 2: install
documentation.
-------------------------------------------------------------------------------
pycppad: Python Algorithmic Differentiation Using CppAD
Authors: Sebastian F. Walter and Bradley M. Bell.

BSD style using the
http://www.opensource.org/licenses/bsd-license.php
template as it was on 2009-01-24 with the following substitutions:
    <YEAR> = 2008-2009
    <OWNER> = Bradley M. Bell and Sebastian F. Walter
    <ORGANIZATION> = contributors' organizations
In addition, "Neither the name of the contributors' organizations"
was changed to "Neither the names of the contributors' organizations".
-------------------------------------------------------------------------------
Copyright (c) 2008-2009, Bradley M. Bell and Sebastian F. Walter
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:

* Redistributions of source code must retain the above copyright notice,
  this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
  this list of conditions and the following disclaimer in the documentation
  and/or other materials provided with the distribution.
* Neither the names of the contributors' organizations nor the names of its
  contributors may be used to endorse or promote products derived from this
  software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
A | |
6.3: abort_recording | Abort a Recording of AD Operations |
6.3.1: abort_recording.py | abort_recording: Example and Test |
5.8: abs | Absolute Value Functions |
5.8.1: abs.py | abs: Example and Test |
5.1: ad | Create an Object With One Higher Level of AD |
5.1.1: ad.py | ad: Example and Test |
6: ad_function | AD Function Methods |
5.4: ad_numeric | Binary Numeric Operators With an AD Result |
5.4.1: ad_numeric.py | Binary Numeric Operators With an AD Result: Example and Test |
5.3: ad_unary | Unary Plus and Minus Operators |
5.3.1: ad_unary.py | Unary Plus and Minus Operators: Example and Test |
5: ad_variable | AD Variable Methods |
6.2: adfun | Create an AD Function Object |
6.2.1: adfun.py | adfun: Example and Test |
5.5: assign_op | Computed Assignment Operators |
5.5.1: assign_op.py | Computed Assignment Operators: Example and Test |
C | |
5.6: compare_op | Binary Comparison Operators |
5.6.1: compare_op.py | a_float Comparison Operators: Example and Test |
5.9: condexp | Conditional Expressions |
5.9.1: condexp.py | condexp: Example and Test |
E | |
4: example | List of All the pycppad Examples |
F | |
6.4: forward | Forward Mode: Derivative in One Domain Direction |
6.4.1: forward_0.py | Forward Order Zero: Example and Test |
6.4.2: forward_1.py | Forward Order One: Example and Test |
G | |
3: get_started.py | get_started: Example and Test |
H | |
6.7: hessian | Driver for Computing Hessian in a Range Direction |
6.7.1: hessian.py | Hessian Driver: Example and Test |
I | |
6.1: independent | Create an Independent Variable Vector |
6.1.1: independent.py | independent: Example and Test |
2: install | Installing pycppad |
J | |
6.6: jacobian | Driver for Computing Entire Derivative |
6.6.1: jacobian.py | Entire Derivative: Example and Test |
L | |
10: license | License |
O | |
6.8: optimize | Optimize an AD Function Object Tape |
6.8.1: optimize.py | Optimize Function Object: Example and Test |
P | |
: pycppad | pycppad-20121020: A Python Algorithm Derivative Package |
R | |
6.5: reverse | Reverse Mode: Derivative in One Range Direction |
6.5.1: reverse_1.py | Reverse Order One: Example and Test |
6.5.2: reverse_2.py | Reverse Order Two: Example and Test |
8: runge_kutta_4 | Fourth Order Runge Kutta |
8.2: runge_kutta_4_ad.py | runge_kutta_4 An AD Example and Test |
8.1: runge_kutta_4_correct.py | runge_kutta_4 A Correctness Example and Test |
8.3: runge_kutta_4_cpp.py | runge_kutta_4 With C++ Speed: Example and Test |
S | |
5.7: std_math | Standard Math Unary Functions |
5.7.1: std_math.py | Standard Math Unary Functions: Example and Test |
T | |
7: two_levels.py | Using Two Levels of AD: Example and Test |
V | |
5.2: value | Create an Object With One Lower Level of AD |
5.2.1: value.py | value: Example and Test |
W | |
9.3: whats_new_09 | Extensions, Bug Fixes, and Changes During 2009 |
9.2: whats_new_10 | Extensions, Bug Fixes, and Changes During 2010 |
9.1: whats_new_11 | Extensions, Bug Fixes, and Changes During 2011 |
9: whats_new_12 | Extensions, Bug Fixes, and Changes During 2012 |