.. _high-level-extending:

High-level extension API
========================
This extension API is exposed through the :mod:`numba.extending` module.

To aid debugging extensions to Numba, it's recommended to set the following
environment variable::

    NUMBA_CAPTURED_ERRORS="new_style"

This makes it easy to differentiate between errors in the implementation and
acceptable errors that can take part in, e.g., type inference. For more
information see :envvar:`NUMBA_CAPTURED_ERRORS`.
Implementing functions
----------------------
The ``@overload`` decorator allows you to implement arbitrary functions
for use in :term:`nopython mode` functions. The function decorated with
``@overload`` is called at compile-time with the *types* of the function's
runtime arguments. It should return a callable representing the
*implementation* of the function for the given types. The returned
implementation is compiled by Numba as if it were a normal function
decorated with ``@jit``. Additional options to ``@jit`` can be passed as a
dictionary using the ``jit_options`` argument.
For example, let's pretend Numba doesn't support the :func:`len` function
on tuples yet. Here is how to implement it using ``@overload``::
    from numba import types
    from numba.extending import overload

    @overload(len)
    def tuple_len(seq):
        if isinstance(seq, types.BaseTuple):
            n = len(seq)

            def len_impl(seq):
                return n
            return len_impl
You might wonder, what happens if :func:`len()` is called with something
other than a tuple? If a function decorated with ``@overload`` doesn't
return anything (i.e. returns None), other definitions are tried until
one succeeds. Therefore, multiple libraries may overload :func:`len()`
for different types without conflicting with each other.
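The "try each definition until one succeeds" behaviour can be sketched in
plain Python. This is only an illustration of the dispatch rule, not Numba's
actual machinery; all names below are made up, and an ordinary ``tuple``
of element types stands in for ``types.BaseTuple``::

    _len_overloads = []

    def register_len_overload(fn):
        # stand-in for @overload(len)
        _len_overloads.append(fn)
        return fn

    def resolve_len(arg_type):
        # try each registered definition; the first one returning an
        # implementation (i.e. not None) wins
        for ov in _len_overloads:
            impl = ov(arg_type)
            if impl is not None:
                return impl
        raise TypeError("no len() overload for %r" % (arg_type,))

    @register_len_overload
    def tuple_len(seq):
        if isinstance(seq, tuple):  # stand-in for types.BaseTuple
            n = len(seq)

            def len_impl(seq):
                return n
            return len_impl

Here ``resolve_len((int, float))`` returns an implementation, while an
unhandled argument type raises, leaving room for another library to register
its own definition.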
Implementing methods
--------------------
The ``@overload_method`` decorator similarly allows implementing a
method on a type well-known to Numba.
.. autofunction:: numba.core.extending.overload_method
Implementing classmethods
-------------------------
The ``@overload_classmethod`` decorator similarly allows implementing a
classmethod on a type well-known to Numba.
.. autofunction:: numba.core.extending.overload_classmethod
Implementing attributes
-----------------------
The ``@overload_attribute`` decorator allows implementing a data
attribute (or property) on a type. Only reading the attribute is
possible; writable attributes are only supported through the
:ref:`low-level API <low-level-extending>`.
The following example implements the :attr:`~numpy.ndarray.nbytes` attribute
on NumPy arrays::
    @overload_attribute(types.Array, 'nbytes')
    def array_nbytes(arr):
        def get(arr):
            return arr.size * arr.itemsize
        return get
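The arithmetic in ``get`` can be sanity-checked without Numba or NumPy, using
the standard-library ``array`` module (NumPy's ``ndarray.nbytes`` follows the
same size-times-itemsize rule)::

    from array import array

    a = array('d', range(10))      # ten float64 ("d") items
    nbytes = len(a) * a.itemsize   # same formula as get() above
    print(nbytes)                  # 10 * 8 = 80 bytes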
.. _cython-support:

Importing Cython Functions
--------------------------
The function ``get_cython_function_address`` obtains the address of a
C function in a Cython extension module. The address can be used to
access the C function via a :func:`ctypes.CFUNCTYPE` callback, thus
allowing use of the C function inside a Numba jitted function. For
example, suppose that you have the file ``foo.pyx``::
    from libc.math cimport exp

    cdef api double myexp(double x):
        return exp(x)
You can access ``myexp`` from Numba in the following way::
    import ctypes
    from numba.extending import get_cython_function_address

    addr = get_cython_function_address("foo", "myexp")
    functype = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)
    myexp = functype(addr)
The function ``myexp`` can now be used inside jitted functions, for
example::
    @njit
    def double_myexp(x):
        return 2*myexp(x)
One caveat is that if your function uses Cython's fused types, then
the function's name will be mangled. To find out the mangled name of
your function you can check the extension module's ``__pyx_capi__``
attribute.
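The address-to-callable pattern itself does not depend on Cython. As an
illustration that runs without any extension module, the same steps can be
applied to a function from CPython's own C API (``Py_GetVersion``, which
takes no arguments and returns a C string); here ``ctypes.cast`` stands in
for ``get_cython_function_address``::

    import ctypes

    # obtain a raw function address, as get_cython_function_address() would
    addr = ctypes.cast(ctypes.pythonapi.Py_GetVersion, ctypes.c_void_p).value

    # declare the C signature and rebuild a callable from the raw address
    functype = ctypes.CFUNCTYPE(ctypes.c_char_p)
    py_get_version = functype(addr)

    print(py_get_version())  # e.g. b'3.11.0 (main, ...)'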
Implementing intrinsics
-----------------------
The ``@intrinsic`` decorator is used for marking a function *func* as typing and
implementing the function in ``nopython`` mode using the
`llvmlite IRBuilder API <http://llvmlite.pydata.org/en/latest/user-guide/ir/ir-builder.html>`_.
This is an escape hatch for expert users to build custom LLVM IR that will be
inlined into the caller; there is no safety net!
The first argument to *func* is the typing context. The rest of the arguments
correspond to the types of the arguments of the decorated function. These
arguments also serve as the formal arguments of the decorated function. If
*func* has the signature ``foo(typing_context, arg0, arg1)``, the decorated
function will have the signature ``foo(arg0, arg1)``.

The return value of *func* should be a 2-tuple of the expected type signature
and a code-generation function that will be passed to
:func:`~numba.targets.imputils.lower_builtin`. For an unsupported operation,
return ``None``.
Here is an example that casts any integer to a byte pointer::
    from numba import types
    from numba.extending import intrinsic

    @intrinsic
    def cast_int_to_byte_ptr(typingctx, src):
        # check for accepted types
        if isinstance(src, types.Integer):
            # create the expected type signature
            result_type = types.CPointer(types.uint8)
            sig = result_type(types.uintp)

            # defines the custom code generation
            def codegen(context, builder, signature, args):
                # llvm IRBuilder code here
                [src] = args
                rtype = signature.return_type
                llrtype = context.get_value_type(rtype)
                return builder.inttoptr(src, llrtype)
            return sig, codegen
It may be used as follows::
    from numba import njit

    @njit('void(int64)')
    def foo(x):
        y = cast_int_to_byte_ptr(x)

    foo.inspect_types()
and the output of ``.inspect_types()`` demonstrates the cast (note the
``uint8*``)::
    def foo(x):

        # x = arg(0, name=x) :: int64
        # $0.1 = global(cast_int_to_byte_ptr: <intrinsic cast_int_to_byte_ptr>) :: Function(<intrinsic cast_int_to_byte_ptr>)
        # $0.3 = call $0.1(x, func=$0.1, args=[Var(x, check_intrin.py (24))], kws=(), vararg=None) :: (uint64,) -> uint8*
        # del x
        # del $0.1
        # y = $0.3 :: uint8*
        # del y
        # del $0.3
        # $const0.4 = const(NoneType, None) :: none
        # $0.5 = cast(value=$const0.4) :: none
        # del $const0.4
        # return $0.5

        y = cast_int_to_byte_ptr(x)
Implementing mutable structures
-------------------------------
.. warning:: This is an experimental feature, the API may change without warning.
The ``numba.experimental.structref`` module provides utilities for defining
mutable pass-by-reference structures, a ``StructRef``. The following example
demonstrates how to define a basic mutable structure:
Defining a StructRef
''''''''''''''''''''
.. literalinclude:: ../../../numba/tests/doc_examples/test_structref_usage.py
   :language: python
   :caption: from ``numba/tests/doc_examples/test_structref_usage.py``
   :start-after: magictoken.ex_structref_type_definition.begin
   :end-before: magictoken.ex_structref_type_definition.end
   :dedent: 0
   :linenos:
The following demonstrates using the above mutable struct definition:
.. literalinclude:: ../../../numba/tests/doc_examples/test_structref_usage.py
   :language: python
   :caption: from ``test_type_definition`` of ``numba/tests/doc_examples/test_structref_usage.py``
   :start-after: magictoken.ex_structref_type_definition_test.begin
   :end-before: magictoken.ex_structref_type_definition_test.end
   :dedent: 8
   :linenos:
Defining a method on StructRef
''''''''''''''''''''''''''''''
Methods and attributes can be attached using ``@overload_*`` as shown in the
previous sections.
The following demonstrates the use of ``@overload_method`` to insert a
method for instances of ``MyStructType``:
.. literalinclude:: ../../../numba/tests/doc_examples/test_structref_usage.py
   :language: python
   :caption: from ``test_overload_method`` of ``numba/tests/doc_examples/test_structref_usage.py``
   :start-after: magictoken.ex_structref_method.begin
   :end-before: magictoken.ex_structref_method.end
   :dedent: 8
   :linenos:
``numba.experimental.structref`` API Reference
''''''''''''''''''''''''''''''''''''''''''''''
.. automodule:: numba.experimental.structref
   :members:
Determining if a function is already wrapped by a ``jit`` family decorator
--------------------------------------------------------------------------
The following function is provided for this purpose.
.. automethod:: numba.extending.is_jitted
Extending Numba
===============
.. module:: numba.extending
This chapter describes how to extend Numba to make it recognize and support
additional operations, functions or types. Numba provides two categories
of APIs to this end:
* The high-level APIs provide abstracted entry points which are sufficient
  for simple uses. They require little knowledge of Numba's internal
  compilation chain.

* The low-level APIs reflect Numba's internal compilation chain and allow
  flexible interaction with its various layers, but require more effort
  and experience with Numba internals.
It may be helpful for readers of this chapter to also read some of the
documents in the :doc:`developer manual <../developer/index>`, especially
the :doc:`architecture document <../developer/architecture>`.
.. toctree::

   high-level.rst
   low-level.rst
   interval-example.rst
   overloading-guide.rst
   entrypoints.rst
Example: An Interval Type
=========================
In this example, we will extend the Numba frontend to add support for a user-defined
class that it does not internally support. This will allow:
* Passing an instance of the class to a Numba function
* Accessing attributes of the class in a Numba function
* Constructing and returning a new instance of the class from a Numba function
(all the above in :term:`nopython mode`)
We will mix APIs from the :ref:`high-level extension API <high-level-extending>`
and the :ref:`low-level extension API <low-level-extending>`, depending on what is
available for a given task.
The starting point for our example is the following pure Python class:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.interval_py_class.begin
   :end-before: magictoken.interval_py_class.end
   :dedent: 8
Extending the typing layer
""""""""""""""""""""""""""
Creating a new Numba type
-------------------------
As the ``Interval`` class is not known to Numba, we must create a new Numba
type to represent instances of it. Numba does not deal with Python types
directly: it has its own type system that allows a different level of
granularity as well as various meta-information not available with regular
Python types.
We first create a type class ``IntervalType`` and, since we don't need the
type to be parametric, we instantiate a single type instance ``interval_type``:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.interval_type_class.begin
   :end-before: magictoken.interval_type_class.end
   :dedent: 8
Type inference for Python values
--------------------------------
In itself, creating a Numba type doesn't do anything. We must teach Numba
how to infer some Python values as instances of that type. In this example,
it is trivial: any instance of the ``Interval`` class should be treated as
belonging to the type ``interval_type``:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.interval_typeof_register.begin
   :end-before: magictoken.interval_typeof_register.end
   :dedent: 8
Function arguments and global values will thus be recognized as belonging
to ``interval_type`` whenever they are instances of ``Interval``.
Type inference for Python annotations
-------------------------------------
While ``typeof`` is used to infer the Numba type of Python objects,
``as_numba_type`` is used to infer the Numba type of Python types. For simple
cases, we can directly register that the Python type ``Interval`` corresponds
with the Numba type ``interval_type``:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.numba_type_register.begin
   :end-before: magictoken.numba_type_register.end
   :dedent: 8
Note that ``as_numba_type`` is only used to infer types from type annotations at
compile time. The ``typeof`` registry above is used to infer the type of
objects at runtime.
Type inference for operations
-----------------------------
We want to be able to construct interval objects from Numba functions, so
we must teach Numba to recognize the two-argument ``Interval(lo, hi)``
constructor. The arguments should be floating-point numbers:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.numba_type_callable.begin
   :end-before: magictoken.numba_type_callable.end
   :dedent: 8
The :func:`type_callable` decorator specifies that the decorated function
should be invoked when running type inference for the given callable object
(here the ``Interval`` class itself). The decorated function must simply
return a typer function that will be called with the argument types. The
reason for this seemingly convoluted setup is for the typer function to have
*exactly* the same signature as the typed callable. This allows handling
keyword arguments correctly.
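The point about keyword arguments can be seen with a plain function: because
the typer below has exactly the two-argument signature of the ``Interval``
constructor, Python's own argument binding resolves positional and keyword
calls identically. This is an illustrative sketch only, with strings standing
in for Numba type instances::

    def interval_typer(lo, hi):
        # accept only float64 arguments, as in the example above
        if lo == 'float64' and hi == 'float64':
            return 'interval_type'

    # both call styles bind the same way, so both type-check identically
    print(interval_typer('float64', 'float64'))        # interval_type
    print(interval_typer(hi='float64', lo='float64'))  # interval_type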
The *context* argument received by the decorated function is useful in
more sophisticated cases where computing the callable's return type
requires resolving other types.
Extending the lowering layer
""""""""""""""""""""""""""""
We have finished teaching Numba about our type inference additions.
We must now teach Numba how to actually generate code and data for
the new operations.
Defining the data model for native intervals
--------------------------------------------
As a general rule, :term:`nopython mode` does not work on Python objects
as they are generated by the CPython interpreter. The representations
used by the interpreter are far too inefficient for fast native code.
Each type supported in :term:`nopython mode` therefore has to define
a tailored native representation, also called a *data model*.
A common case is an immutable struct-like data model, akin to a C
``struct``. Our interval datatype conveniently falls in that category,
and here is a possible data model for it:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.interval_model.begin
   :end-before: magictoken.interval_model.end
   :dedent: 8
This instructs Numba that values of type ``IntervalType`` (or any instance
thereof) are represented as a structure of two fields ``lo`` and ``hi``,
each of them a double-precision floating-point number (``types.float64``).
.. note::
    Mutable types need more sophisticated data models to be able to
    persist their values after modification. They typically cannot be
    stored and passed on the stack or in registers as immutable types are.
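For intuition, the model above describes a flat native layout comparable to
the following C-style struct, expressed here with ``ctypes`` (illustrative
only; Numba does not use ``ctypes`` for its data models)::

    import ctypes

    class IntervalStruct(ctypes.Structure):
        # two float64 fields, mirroring the StructModel members
        _fields_ = [('lo', ctypes.c_double),
                    ('hi', ctypes.c_double)]

    v = IntervalStruct(1.5, 3.5)
    print(ctypes.sizeof(v))  # 16: two 8-byte doubles, no Python object overhead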
Exposing data model attributes
------------------------------
We want the data model attributes ``lo`` and ``hi`` to be exposed under
the same names for use in Numba functions. Numba provides a convenience
function to do exactly that:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.interval_attribute_wrapper.begin
   :end-before: magictoken.interval_attribute_wrapper.end
   :dedent: 8
This will expose the attributes in read-only mode. As mentioned above,
writable attributes don't fit in this model.
Exposing a property
-------------------
As the ``width`` property is computed rather than stored in the structure,
we cannot simply expose it like we did for ``lo`` and ``hi``. We have to
re-implement it explicitly:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.interval_overload_attribute.begin
   :end-before: magictoken.interval_overload_attribute.end
   :dedent: 8
You might ask why we didn't need to expose a type inference hook for this
attribute. The answer is that ``@overload_attribute`` is part of the
high-level API: it combines type inference and code generation in a
single API.
Implementing the constructor
----------------------------
Now we want to implement the two-argument ``Interval`` constructor:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.interval_lower_builtin.begin
   :end-before: magictoken.interval_lower_builtin.end
   :dedent: 8
There is a bit more going on here. ``@lower_builtin`` decorates the
implementation of the given callable or operation (here the ``Interval``
constructor) for some specific argument types. This allows defining
type-specific implementations of a given operation, which is important
for heavily overloaded functions such as :func:`len`.
``types.Float`` is the class of all floating-point types (``types.float64``
is an instance of ``types.Float``). It is generally more future-proof
to match argument types on their class rather than on specific instances
(however, when *returning* a type, chiefly during the type inference
phase, you must usually return a type instance).
``cgutils.create_struct_proxy()`` and ``interval._getvalue()`` are a bit
of boilerplate due to how Numba passes values around. Values are passed
as instances of :class:`llvmlite.ir.Value`, which can be too limited:
LLVM structure values especially are quite low-level. A struct proxy
is a temporary wrapper around an LLVM structure value that makes it easy
to get or set members of the structure. The ``_getvalue()`` call simply
gets the LLVM value out of the wrapper.
Boxing and unboxing
-------------------
If you try to use an ``Interval`` instance at this point, you'll certainly
get the error *"cannot convert Interval to native value"*. This is because
Numba doesn't yet know how to make a native interval value from a Python
``Interval`` instance. Let's teach it how to do it:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.interval_unbox.begin
   :end-before: magictoken.interval_unbox.end
   :dedent: 8
*Unbox* is the other name for "convert a Python object to a native value"
(it fits the idea of a Python object as a sophisticated box containing
a simple native value). The function returns a ``NativeValue`` object
which gives its caller access to the computed native value, the error bit
and possibly other information.
The snippet above makes abundant use of the ``c.pyapi`` object, which
gives access to a subset of the
`Python interpreter's C API <https://docs.python.org/3/c-api/index.html>`_.
Note the use of ``early_exit_if_null`` to detect and handle any errors that
may have happened when unboxing the object (try passing ``Interval('a', 'b')``
for example).
We also want to do the reverse operation, called *boxing*, so as to return
interval values from Numba functions:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.interval_box.begin
   :end-before: magictoken.interval_box.end
   :dedent: 8
Using it
""""""""
:term:`nopython mode` functions are now able to make use of Interval objects
and the various operations you have defined on them. You can try for
example the following functions:
.. literalinclude:: ../../../numba/tests/doc_examples/test_interval_example.py
   :language: python
   :start-after: magictoken.interval_usage.begin
   :end-before: magictoken.interval_usage.end
   :dedent: 8
Conclusion
""""""""""
We have shown how to do the following tasks:
* Define a new Numba type class by subclassing the ``Type`` class
* Define a singleton Numba type instance for a non-parametric type
* Teach Numba how to infer the Numba type of Python values of a certain class,
  using ``typeof_impl.register``
* Teach Numba how to infer the Numba type of the Python type itself, using
  ``as_numba_type.register``
* Define the data model for a Numba type using ``StructModel``
  and ``register_model``
* Implement a boxing function for a Numba type using the ``@box`` decorator
* Implement an unboxing function for a Numba type using the ``@unbox`` decorator
  and the ``NativeValue`` class
* Type and implement a callable using the ``@type_callable`` and
  ``@lower_builtin`` decorators
* Expose a read-only structure attribute using the ``make_attribute_wrapper``
  convenience function
* Implement a read-only property using the ``@overload_attribute`` decorator
.. _low-level-extending:

Low-level extension API
=======================
This extension API is available through the :mod:`numba.extending` module.
It allows you to hook directly into the Numba compilation chain. As such,
it distinguishes between several compilation phases:
* The :term:`typing` phase deduces the types of variables in a compiled
  function by looking at the operations performed.

* The :term:`lowering` phase converts high-level Python operations into
  low-level LLVM code. This phase exploits the typing information derived
  by the typing phase.

* *Boxing* and *unboxing* convert Python objects into native values, and
  vice-versa. They occur at the boundaries of calling a Numba function
  from the Python interpreter.
Typing
------
.. XXX the API described here can be insufficient for some use cases.
   Should we describe the whole templates menagerie?
Type inference -- or simply *typing* -- is the process of assigning
Numba types to all values involved in a function, so as to enable
efficient code generation. Broadly speaking, typing comes in two flavours:
typing plain Python *values* (e.g. function arguments or global variables)
and typing *operations* (or *functions*) on known value types.
.. decorator:: typeof_impl.register(cls)

   Register the decorated function as typing Python values of class *cls*.
   The decorated function will be called with the signature ``(val, c)``
   where *val* is the Python value being typed and *c* is a context
   object.
.. decorator:: type_callable(func)

   Register the decorated function as typing the callable *func*.
   *func* can be either an actual Python callable or a string denoting
   an operation internally known to Numba (for example ``'getitem'``).
   The decorated function is called with a single *context* argument
   and must return a typer function. The typer function should have
   the same signature as the function being typed, and it is called
   with the Numba *types* of the function arguments; it should return
   either the Numba type of the function's return value, or ``None``
   if inference failed.
.. function:: as_numba_type.register(py_type, numba_type)

   Register that the Python type *py_type* corresponds with the Numba type
   *numba_type*. This can be used to register a new type or overwrite the
   existing default (e.g. to treat ``float`` as ``numba.float32`` instead of
   ``numba.float64``).

   This function can also be used as a decorator. It registers the decorated
   function as a type inference function used by ``as_numba_type`` when trying
   to infer the Numba type of a Python type. The decorated function is called
   with a single *py_type* argument and returns either a corresponding Numba
   type, or ``None`` if it cannot infer that *py_type*.
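The two registration forms and their lookup order can be pictured with a
small plain-Python registry. This is a sketch of the behaviour described
above only, not Numba's implementation; the type-name strings are made up::

    _direct = {}          # py_type -> numba type
    _inference_fns = []   # fallback inference functions

    def register(arg, numba_type=None):
        if numba_type is not None:
            _direct[arg] = numba_type      # register(py_type, numba_type)
            return None
        _inference_fns.append(arg)         # decorator form
        return arg

    def infer(py_type):
        # direct registrations win; otherwise try each inference function
        if py_type in _direct:
            return _direct[py_type]
        for fn in _inference_fns:
            result = fn(py_type)
            if result is not None:
                return result
        return None

    register(float, 'float64')

    @register
    def infer_list(py_type):
        import typing
        if typing.get_origin(py_type) is list:
            item = infer(typing.get_args(py_type)[0])
            if item is not None:
                return 'reflected list(%s)' % item
        return None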
Lowering
--------
The following decorators all take a type specification of some kind.
A type specification is usually a type class (such as ``types.Float``)
or a specific type instance (such as ``types.float64``). Some values
have a special meaning:
* ``types.Any`` matches any type; this allows doing your own dispatching
  inside the implementation

* ``types.VarArg(<some type>)`` matches any number of arguments of the
  given type; it can only appear as the last type specification when
  describing a function's arguments.
A *context* argument in the following APIs is a target context providing
various utility methods for code generation (such as creating a constant,
converting from one type to another, looking up the implementation of a
specific function, etc.). A *builder* argument is a
:class:`llvmlite.ir.IRBuilder` instance for the LLVM code being generated.
A *signature* is an object specifying the concrete type of an operation.
The ``args`` attribute of the signature is a tuple of the argument types.
The ``return_type`` attribute of the signature is the type that the
operation should return.
.. note::
    Numba always reasons on Numba types, but the values being passed
    around during lowering are LLVM values: they don't hold the required
    type information, which is why Numba types are passed explicitly too.

    LLVM has its own, very low-level type system: you can access the LLVM
    type of a value by looking up its ``.type`` attribute.
Native operations
'''''''''''''''''
.. decorator:: lower_builtin(func, typespec, ...)

   Register the decorated function as implementing the callable *func*
   for the arguments described by the given Numba *typespecs*.
   As with :func:`type_callable`, *func* can be either an actual Python
   callable or a string denoting an operation internally known to Numba
   (for example ``'getitem'``).

   The decorated function is called with four arguments
   ``(context, builder, sig, args)``. ``sig`` is the concrete signature
   the callable is being invoked with. ``args`` is a tuple of the values
   of the arguments the callable is being invoked with; each value in
   ``args`` corresponds to a type in ``sig.args``. The function
   must return a value compatible with the type ``sig.return_type``.
.. decorator:: lower_getattr(typespec, name)

   Register the decorated function as implementing the attribute *name*
   of the given *typespec*. The decorated function is called with four
   arguments ``(context, builder, typ, value)``. *typ* is the concrete
   type the attribute is being looked up on. *value* is the value the
   attribute is being looked up on.
.. decorator:: lower_getattr_generic(typespec)

   Register the decorated function as a fallback for attribute lookup
   on a given *typespec*. Any attribute that does not have a corresponding
   :func:`lower_getattr` declaration will go through
   :func:`lower_getattr_generic`. The decorated function is called with
   five arguments ``(context, builder, typ, value, name)``. *typ*
   and *value* are as in :func:`lower_getattr`. *name* is the name
   of the attribute being looked up.
.. decorator:: lower_cast(fromspec, tospec)

   Register the decorated function as converting from types described by
   *fromspec* to types described by *tospec*. The decorated function
   is called with five arguments ``(context, builder, fromty, toty, value)``.
   *fromty* and *toty* are the concrete types being converted from and to,
   respectively. *value* is the value being converted. The function
   must return a value compatible with the type ``toty``.
Constants
'''''''''
.. decorator:: lower_constant(typespec)

   Register the decorated function as implementing the creation of
   constants for the Numba *typespec*. The decorated function
   is called with four arguments ``(context, builder, ty, pyval)``.
   *ty* is the concrete type to create a constant for. *pyval*
   is the Python value to convert into an LLVM constant.
   The function must return a value compatible with the type ``ty``.
Boxing and unboxing
'''''''''''''''''''
In these functions, *c* is a convenience object with several attributes:

* its ``context`` attribute is a target context as above
* its ``builder`` attribute is a :class:`llvmlite.ir.IRBuilder` as above
* its ``pyapi`` attribute is an object giving access to a subset of the
  `Python interpreter's C API <https://docs.python.org/3/c-api/index.html>`_

An object, as opposed to a native value, is a ``PyObject *`` pointer.
Such pointers can be produced or processed by the methods in the ``pyapi``
object.
.. decorator:: box(typespec)

   Register the decorated function as boxing values matching the *typespec*.
   The decorated function is called with three arguments ``(typ, val, c)``.
   *typ* is the concrete type being boxed. *val* is the value being
   boxed. The function should return a Python object, or NULL to signal
   an error.
.. decorator:: unbox(typespec)

   Register the decorated function as unboxing values matching the *typespec*.
   The decorated function is called with three arguments ``(typ, obj, c)``.
   *typ* is the concrete type being unboxed. *obj* is the Python object
   (a ``PyObject *`` pointer, in C terms) being unboxed. The function
   should return a ``NativeValue`` object giving the unboxing result value
   and an optional error bit.
import numpy as np
from numba import njit, types
from numba.extending import overload, register_jitable
from numba.core.errors import TypingError

import scipy.linalg


@register_jitable
def _oneD_norm_2(a):
    # re-usable implementation of the 2-norm
    val = np.abs(a)
    return np.sqrt(np.sum(val * val))


@overload(scipy.linalg.norm)
def jit_norm(a, ord=None):
    if isinstance(ord, types.Optional):
        ord = ord.type
    # Reject non integer, floating-point or None types for ord
    if not isinstance(ord, (types.Integer, types.Float, types.NoneType)):
        raise TypingError("'ord' must be either integer or floating-point")
    # Reject non-ndarray types
    if not isinstance(a, types.Array):
        raise TypingError("Only accepts NumPy ndarray")
    # Reject ndarrays with non integer or floating-point dtype
    if not isinstance(a.dtype, (types.Integer, types.Float)):
        raise TypingError("Only integer and floating point types accepted")
    # Reject ndarrays with unsupported dimensionality
    if not (0 <= a.ndim <= 2):
        raise TypingError('3D and beyond are not allowed')
    # Implementation for scalars/0d-arrays: the norm of a scalar is its
    # absolute value (an overload must return an implementation function)
    elif a.ndim == 0:
        def _norm_0d(a, ord=None):
            return abs(a.item())
        return _norm_0d
    # Implementation for vectors
    elif a.ndim == 1:
        def _oneD_norm_x(a, ord=None):
            if ord == 2 or ord is None:
                return _oneD_norm_2(a)
            elif ord == np.inf:
                return np.max(np.abs(a))
            elif ord == -np.inf:
                return np.min(np.abs(a))
            elif ord == 0:
                return np.sum(a != 0)
            elif ord == 1:
                return np.sum(np.abs(a))
            else:
                return np.sum(np.abs(a)**ord)**(1. / ord)
        return _oneD_norm_x
    # Implementation for matrices
    elif a.ndim == 2:
        def _two_D_norm_2(a, ord=None):
            return _oneD_norm_2(a.ravel())
        return _two_D_norm_2


if __name__ == "__main__":
    @njit
    def use(a, ord=None):
        # simple test function to check that the overload works
        return scipy.linalg.norm(a, ord)

    # spot check for vectors
    a = np.arange(10)
    print(use(a))
    print(scipy.linalg.norm(a))

    # spot check for matrices
    b = np.arange(9).reshape((3, 3))
    print(use(b))
    print(scipy.linalg.norm(b))
.. _overloading-guide:

==============================
A guide to using ``@overload``
==============================
As mentioned in the :ref:`high-level extension API <high-level-extending>`, you
can use the ``@overload`` decorator to create a Numba implementation of a
function that can be used in :term:`nopython mode` functions. A common use case
is to re-implement NumPy functions so that they can be called in ``@jit``
decorated code. This section discusses how and when to use the ``@overload``
decorator and what contributing such a function to the Numba code base might
entail. This should help you get started when needing to use the ``@overload``
decorator or when attempting to contribute new functions to Numba itself.
The ``@overload`` decorator and its variants are useful when you have a
third-party library that you do not control and wish to provide Numba
compatible implementations for specific functions from that library.
Concrete Example
================
Let's assume that you are working on a minimization algorithm that makes use of
|scipy.linalg.norm|_ to find different vector norms and the `frobenius
norm <https://en.wikipedia.org/wiki/Frobenius_inner_product>`_ for matrices.
You know that only integer and real numbers will be involved. (While this may
sound like an artificial example, especially because a Numba implementation of
``numpy.linalg.norm`` exists, it is largely pedagogical and serves to
illustrate how and when to use ``@overload``).
.. |scipy.linalg.norm| replace:: ``scipy.linalg.norm``
.. _scipy.linalg.norm: https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.norm.html
The skeleton might look something like this::
    def algorithm():
        # setup
        v = ...
        while True:
            # take a step
            d = scipy.linalg.norm(v)
            if d < tolerance:
                break
Now, let's further assume that you have heard of Numba and wish to use it
to accelerate your function. However, after adding the ``jit(nopython=True)``
decorator, Numba complains that ``scipy.linalg.norm`` isn't supported. From
looking at the documentation, you realize that a norm is probably fairly easy
to implement using NumPy. A good starting point is the following template.
.. literalinclude:: template.py
After some deliberation and tinkering, you end up with the following code:
.. literalinclude:: mynorm.py
As you can see, the implementation only supports what you need right now:

* Only integer and floating-point types are supported.
* All vector norms are available.
* For matrices, only the Frobenius norm is available.
* Code is shared between the vector and matrix implementations using
  ``@register_jitable``.
* Norms are implemented using NumPy syntax. (This is possible because
  Numba is very aware of NumPy and many functions are supported.)
So what actually happens here? The ``overload`` decorator registers a suitable
implementation for ``scipy.linalg.norm`` in case a call to this is encountered
in code that is being JIT-compiled, for example when you decorate your
``algorithm`` function with ``@jit(nopython=True)``. In that case, the function
``jit_norm`` will be called with the currently encountered types and will then
return either ``_oneD_norm_x`` in the vector case or ``_two_D_norm_2`` in the matrix case.
You can download the example code here: :download:`mynorm.py </extending/mynorm.py>`
Implementing ``@overload`` for NumPy functions
==============================================
Numba supports NumPy through the provision of ``@jit`` compatible
re-implementations of NumPy functions. In such cases ``@overload`` is a very
convenient option for writing such implementations; however, there are a few
additional things to watch out for.
* The Numba implementation should match the NumPy implementation as closely as
feasible with respect to accepted types, arguments, raised exceptions and
algorithmic complexity (Big-O / Landau order).
* When implementing supported argument types, bear in mind that, due to
  duck typing, NumPy tends to accept a multitude of argument types beyond
  NumPy arrays, such as scalars, lists, tuples, sets, iterators, generators, etc.
You will need to account for that during type inference and subsequently as
part of the tests.
* A NumPy function may return a scalar, an array, or a data structure
  which matches one of its inputs; you need to be aware of type
  unification problems and dispatch to appropriate implementations. For
example, |np.corrcoef|_ may return an array or a scalar depending on its
inputs.
.. |np.corrcoef| replace:: ``np.corrcoef``
.. _np.corrcoef: https://docs.scipy.org/doc/numpy/reference/generated/numpy.corrcoef.html
* If you are implementing a new function, you should always update the
`documentation
<https://numba.pydata.org/numba-doc/latest/reference/numpysupported.html>`_.
The sources can be found in ``docs/source/reference/numpysupported.rst``. Be
sure to mention any limitations that your implementation has, e.g. no support
for the ``axis`` keyword.
* When writing tests for the functionality itself, it's useful to cover the
  handling of non-finite values, arrays with different shapes and layouts,
complex inputs, scalar inputs, inputs with types for which support is not
documented (e.g. a function which the NumPy docs say requires a float or int
input might also 'work' if given a bool or complex input).
* When writing tests for exceptions, for example if adding tests to
``numba/tests/test_np_functions.py``, you may encounter the following error
message:
.. code::
======================================================================
FAIL: test_foo (numba.tests.test_np_functions.TestNPFunctions)
----------------------------------------------------------------------
Traceback (most recent call last):
File "<path>/numba/numba/tests/support.py", line 645, in tearDown
self.memory_leak_teardown()
File "<path>/numba/numba/tests/support.py", line 619, in memory_leak_teardown
self.assert_no_memory_leak()
File "<path>/numba/numba/tests/support.py", line 628, in assert_no_memory_leak
self.assertEqual(total_alloc, total_free)
AssertionError: 36 != 35
This occurs because raising exceptions from jitted code leads to reference
leaks. Ideally, you will place all exception testing in a separate test
method and then add a call in each test to ``self.disable_leak_check()`` to
disable the leak-check (inherit from ``numba.tests.support.TestCase`` to make
that available).
* For many of the functions that are available in NumPy, there are
corresponding methods defined on the NumPy ``ndarray`` type. For example, the
  function ``repeat`` is available as a NumPy module-level function and as a
  method on the ``ndarray`` class.
.. code:: python
import numpy as np
a = np.arange(10)
# function
np.repeat(a, 10)
# method
a.repeat(10)
Once you have written the function implementation, you can easily use
``@overload_method`` and reuse it. Just be sure to check that NumPy doesn't
diverge in the implementations of its function/method.
As an example, the ``repeat`` function/method:
  .. code:: python

      @extending.overload_method(types.Array, 'repeat')
      def array_repeat(a, repeats):
          def array_repeat_impl(a, repeats):
              # np.repeat has already been overloaded
              return np.repeat(a, repeats)
          return array_repeat_impl
* If you need to create ancillary functions, for example to re-use a small
utility function or to split your implementation across functions for the
sake of readability, you can make use of the ``@register_jitable`` decorator.
This will make those functions available from within your ``@jit`` and
``@overload`` decorated functions.
* The Numba continuous integration (CI) setup tests a wide variety of NumPy
  versions, so you'll sometimes be alerted to a change in behaviour from some
previous NumPy version. If you can find supporting evidence in the NumPy
change log / repository, then you'll need to decide whether to create
branches and attempt to replicate the logic across versions, or use a version
gate (with associated wording in the documentation) to advertise that Numba
replicates NumPy from some particular version onwards.
* You can look at the Numba source code for inspiration; many of the overloaded
  NumPy functions and methods are in ``numba/targets/arrayobj.py``. Below, you
will find a list of implementations to look at that are well implemented in
terms of accepted types and test coverage.
* ``np.repeat``
# Declare that function `myfunc` is going to be overloaded (have a
# substitutable Numba implementation)
@overload(myfunc)
# Define the overload function with formal arguments
# these arguments must be matched in the inner function implementation
def jit_myfunc(arg0, arg1, arg2, ...):
# This scope is for typing, access is available to the *type* of all
# arguments. This information can be used to change the behaviour of the
# implementing function and check that the types are actually supported
# by the implementation.
print(arg0) # this will show the Numba type of arg0
# This is the definition of the function that implements the `myfunc` work.
# It does whatever algorithm is needed to implement myfunc.
def myfunc_impl(arg0, arg1, arg2, ...): # match arguments to jit_myfunc
# < Implementation goes here >
return # whatever needs to be returned by the algorithm
# return the implementation
return myfunc_impl
Glossary
========
.. glossary::
ahead-of-time compilation
AOT compilation
AOT
Compilation of a function in a separate step before running the
program code, producing an on-disk binary object which can be distributed
independently. This is the traditional kind of compilation known
in languages such as C, C++ or Fortran.
bytecode
Python bytecode
The original form in which Python functions are executed. Python
bytecode describes a stack-machine executing abstract (untyped)
operations using operands from both the function stack and the
execution environment (e.g. global variables).
compile-time constant
An expression whose value Numba can infer and freeze at compile-time.
Global variables and closure variables are compile-time constants.
just-in-time compilation
JIT compilation
JIT
Compilation of a function at execution time, as opposed to
:term:`ahead-of-time compilation`.
JIT function
Shorthand for "a function :term:`JIT-compiled <JIT>` with Numba using
the :ref:`@jit <jit>` decorator."
lifted loops
loop-lifting
loop-jitting
A feature of compilation in :term:`object mode` where a loop can be
automatically extracted and compiled in :term:`nopython mode`. This
allows functions with operations unsupported in nopython mode to see
significant performance improvements if they contain loops with only
nopython-supported operations.
lowering
The act of translating :term:`Numba IR` into LLVM IR. The term
"lowering" stems from the fact that LLVM IR is low-level and
machine-specific while Numba IR is high-level and abstract.
NPM
nopython mode
A Numba compilation mode that generates code that does not access the
Python C API. This compilation mode produces the highest performance
code, but requires that the native types of all values in the function
can be :term:`inferred <type inference>`. Unless otherwise instructed,
the ``@jit`` decorator will automatically fall back to :term:`object
mode` if nopython mode cannot be used.
Numba IR
Numba intermediate representation
A representation of a piece of Python code which is more amenable
to analysis and transformations than the original Python
:term:`bytecode`.
object mode
A Numba compilation mode that generates code that handles all values
as Python objects and uses the Python C API to perform all operations
on those objects. Code compiled in object mode will often run
no faster than Python interpreted code, unless the Numba compiler can
take advantage of :term:`loop-jitting`.
``OptionalType``
An ``OptionalType`` is effectively a type union of a ``type`` and ``None``.
They typically occur in practice due to a variable being set to ``None``
and then in a branch the variable being set to some other value. It's
often not possible at compile time to determine if the branch will execute
so to permit :term:`type inference` to complete, the type of the variable
becomes the union of a ``type`` (from the value) and ``None``,
i.e. ``OptionalType(type)``.
type inference
The process by which Numba determines the specialized types of all
values within a function being compiled. Type inference can fail
if arguments or globals have Python types unknown to Numba, or if
functions are used that are not recognized by Numba. Successful
type inference is a prerequisite for compilation in
:term:`nopython mode`.
typing
The act of running :term:`type inference` on a value or operation.
ufunc
A NumPy `universal function <http://docs.scipy.org/doc/numpy/reference/ufuncs.html>`_.
Numba can create new compiled ufuncs with
the :ref:`@vectorize <vectorize>` decorator.
reflection
In numba, when a mutable container is passed as argument to a nopython
function from the Python interpreter, the container object and all its
contained elements are converted into nopython values. To match the
semantics of Python, any mutation on the container inside the nopython
function must be visible in the Python interpreter. To do so, Numba
must update the container and its elements and convert them back into
Python objects during the transition back into the interpreter.
Not to be confused with Python's "reflection" in the context of binary
operators (see https://docs.python.org/3.5/reference/datamodel.html).
.. Numba documentation master file, created by
sphinx-quickstart on Tue Dec 30 11:55:40 2014.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Numba documentation
===================
This is the Numba documentation. Unless you are already acquainted
with Numba, we suggest you start with the :doc:`User manual <user/index>`.
.. toctree::
:caption: For all users
:maxdepth: 2
user/index.rst
reference/index.rst
.. toctree::
:caption: For CUDA users
:maxdepth: 2
cuda/index.rst
cuda-reference/index.rst
.. toctree::
:caption: For advanced users & developers
:maxdepth: 2
extending/index.rst
developer/index.rst
proposals/index.rst
glossary.rst
release-notes-overview.rst
============================
NBEP 4: Defining C callbacks
============================
:Author: Antoine Pitrou
:Date: April 2016
:Status: Draft
Interfacing with some native libraries (for example written in C
or C++) can necessitate writing native callbacks to provide business logic
to the library. Some Python-facing libraries may also provide the
alternative of passing a ctypes-wrapped native callback instead of a
Python callback for better performance. A simple example is the
``scipy.integrate`` package where the user passes the function to be
integrated as a callback.
Users of those libraries may want to benefit from the performance advantage
of running purely native code, while writing their code in Python.
This proposal outlines a scheme to provide such a functionality in
Numba.
Basic usage
===========
We propose adding a new decorator, ``@cfunc``, importable from the main
package. This decorator allows defining a callback as in the following
example::
from numba import cfunc
from numba.types import float64
# A callback with the C signature `double(double)`
@cfunc(float64(float64), nopython=True)
def integrand(x):
return 1 / x
The ``@cfunc`` decorator returns a "C function" object holding the
resources necessary to run the given compiled function (for example its
LLVM module). This object has several attributes and methods:
* the ``ctypes`` attribute is a ctypes function object representing
the native function.
* the ``address`` attribute is the address of the native function code, as
an integer (note this can also be computed from the ``ctypes`` attribute).
* the ``native_name`` attribute is the symbol under which the function
can be looked up inside the current process.
* the ``inspect_llvm()`` method returns the IR for the LLVM module
in which the function is compiled. It is expected that the ``native_name``
attribute corresponds to the function's name in the LLVM IR.
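To illustrate how such a ctypes function object behaves, here is a plain-ctypes analogue built by hand; it involves no Numba and is purely illustrative:

```python
import ctypes

# A ctypes function object with the C signature `double(double)`,
# analogous to what the ``ctypes`` attribute would expose.
PROTO = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double)


def py_integrand(x):
    return 1 / x


c_integrand = PROTO(py_integrand)

# The function address as an integer, comparable to the ``address`` attribute.
address = ctypes.cast(c_integrand, ctypes.c_void_p).value

print(c_integrand(4.0))  # 0.25
```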
The general signature of the decorator is ``cfunc(signature, **options)``.
The ``signature`` must specify the argument types and return type of the
function using Numba types. Unlike with ``@jit``, the return type cannot
be omitted.
The ``options`` are keyword-only parameters specifying compilation options.
We are expecting that the standard ``@jit`` options (``nopython``,
``forceobj``, ``cache``) can be made to work with ``@cfunc``.
Calling from Numba-compiled functions
-------------------------------------
While the intended use is to pass a callback's address to foreign C
code expecting a function pointer, it should be made possible to call
the C callback from a Numba-compiled function.
Passing array data
==================
Native platform ABIs as used by C or C++ don't have the notion of a shaped
array as in Numpy. One common solution is to pass a raw data pointer and
one or several size arguments (depending on dimensionality). Numba must
provide a way to rebuild an array view of this data inside the callback.
::
from numba import cfunc, carray
from numba.types import float64, CPointer, void, intp
# A callback with the C signature `void(double *, double *, size_t)`
@cfunc(void(CPointer(float64), CPointer(float64), intp))
def invert(in_ptr, out_ptr, n):
in_ = carray(in_ptr, (n,))
out = carray(out_ptr, (n,))
for i in range(n):
out[i] = 1 / in_[i]
The ``carray`` function takes ``(pointer, shape, dtype)`` arguments
(``dtype`` being optional) and returns a C-layout array view over the
data *pointer*, with the given *shape* and *dtype*. *pointer* must
be a ctypes pointer object (not a Python integer). The array's
dimensionality corresponds to the *shape* tuple's length. If *dtype*
is not given, the array's dtype corresponds to the *pointer*'s pointee
type.
The ``farray`` function is similar except that it returns a F-layout
array view.
Error handling
==============
There is no standard mechanism in C for error reporting. Unfortunately,
Numba currently doesn't handle ``try..except`` blocks, which makes it more
difficult for the user to implement the required error reporting scheme.
The current stance of this proposal is to let users guard against invalid
arguments where necessary, and do whatever is required to inform the caller
of the error.
Based on user feedback, we can later add support for some error reporting
schemes, such as returning an integer error code depending on whether an
exception was raised, or setting ``errno``.
Deferred topics
===============
Ahead-of-Time compilation
-------------------------
This proposal doesn't make any provision for AOT compilation of C callbacks.
It would probably necessitate a separate API (a new method on the
``numba.pycc.CC`` object), and the implementation would require exposing
a subset of the C function object's functionality from the compiled C
extension module.
Opaque data pointers
--------------------
Some libraries allow passing an opaque data pointer (``void *``) to a
user-provided callback, to provide any required context for execution
of the callback. Taking advantage of this functionality would require
adding specific support in Numba, for example the ability to do generic
conversion from ``types.voidptr`` and to take the address of a
Python-facing ``jitclass`` instance.
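The opaque-pointer pattern described here can be sketched with plain ctypes (purely illustrative, no Numba involved): the caller packs its context into a structure and the callback casts the ``void *`` back:

```python
import ctypes


class Context(ctypes.Structure):
    # Hypothetical per-call context passed through the opaque pointer.
    _fields_ = [("scale", ctypes.c_double)]


# C signature: double(double, void *)
PROTO = ctypes.CFUNCTYPE(ctypes.c_double, ctypes.c_double, ctypes.c_void_p)


def callback(x, userdata):
    # ctypes hands the void * to Python as an integer address;
    # cast it back to the structure the caller packed.
    ctx = ctypes.cast(ctypes.c_void_p(userdata),
                      ctypes.POINTER(Context)).contents
    return ctx.scale * x


c_callback = PROTO(callback)
ctx = Context(scale=2.5)
result = c_callback(4.0, ctypes.addressof(ctx))
print(result)  # 10.0
```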
========================
NBEP 2: Extension points
========================
:Author: Antoine Pitrou
:Date: July 2015
:Status: Draft
Implementing new types or functions in Numba requires hooking into
various mechanisms along the compilation chain (and potentially
outside of it). This document aims, first, at examining the
current ways of doing so and, second, at making proposals to make
extending easier.
If some of the proposals are implemented, we should first strive
to use and exercise them internally, before exposing the APIs to the
public.
.. note::
This document doesn't cover CUDA or any other non-CPU backend.
High-level API
==============
There is currently no high-level API, making some use cases more
complicated than they should be.
Proposed changes
----------------
Dedicated module
''''''''''''''''
We propose the addition of a ``numba.extending`` module exposing the main
APIs useful for extending Numba.
Implementing a function
'''''''''''''''''''''''
We propose the addition of a ``@overload`` decorator allowing the
implementation of a given function for use in :term:`nopython mode`.
The overloading function has the same formal signature as the implemented
function, and receives the actual argument types. It should return a
Python function implementing the overloaded function for the given types.
The following example implements :func:`numpy.where` with
this approach.
.. literalinclude:: np-where-override.py
It is also possible to implement functions already known to Numba, to
support additional types. The following example implements the
built-in function :func:`len` for tuples with this approach::
@overload(len)
def tuple_len(x):
if isinstance(x, types.BaseTuple):
# The tuple length is known at compile-time, so simply reify it
# as a constant.
n = len(x)
def len_impl(x):
return n
return len_impl
Implementing an attribute
'''''''''''''''''''''''''
We propose the addition of a ``@overload_attribute`` decorator allowing
the implementation of an attribute getter for use in :term:`nopython mode`.
The following example implements the ``.nbytes`` attribute on Numpy arrays::
@overload_attribute(types.Array, 'nbytes')
def array_nbytes(arr):
def get(arr):
return arr.size * arr.itemsize
return get
.. note::
The overload_attribute() signature allows for expansion to also define
setters and deleters, by letting the decorated function return a
``getter, setter, deleter`` tuple instead of a single ``getter``.
Implementing a method
'''''''''''''''''''''
We propose the addition of a ``@overload_method`` decorator allowing the
implementation of an instance method for use in :term:`nopython mode`.
The following example implements the ``.take()`` method on Numpy arrays::
@overload_method(types.Array, 'take')
def array_take(arr, indices):
if isinstance(indices, types.Array):
def take_impl(arr, indices):
n = indices.shape[0]
res = np.empty(n, arr.dtype)
for i in range(n):
res[i] = arr[indices[i]]
return res
return take_impl
Exposing a structure member
'''''''''''''''''''''''''''
We propose the addition of a ``make_attribute_wrapper()`` function exposing
an internal field as a visible read-only attribute, for those types backed
by a ``StructModel`` data model.
For example, assuming ``PdIndexType`` is the Numba type of pandas indices,
here is how to expose the underlying Numpy array as a ``._data`` attribute::
@register_model(PdIndexType)
class PdIndexModel(models.StructModel):
def __init__(self, dmm, fe_type):
members = [
('values', fe_type.as_array),
]
models.StructModel.__init__(self, dmm, fe_type, members)
make_attribute_wrapper(PdIndexType, 'values', '_data')
Typing
======
Numba types
-----------
Numba's standard types are declared in :mod:`numba.types`. To declare
a new type, one subclasses the base :class:`Type` class or one of its
existing abstract subclasses, and implements the required functionality.
Proposed changes
''''''''''''''''
No change required.
Type inference on values
------------------------
Values of a new type need to be type-inferred if they can appear as
function arguments or constants. The core machinery is in
:mod:`numba.typing.typeof`.
In the common case where some Python class or classes map exclusively
to the new type, one can extend a generic function to dispatch on said
classes, e.g.::
from numba.typing.typeof import typeof_impl
@typeof_impl(MyClass)
def _typeof_myclass(val, c):
if "some condition":
return MyType(...)
The ``typeof_impl`` specialization must return a Numba type instance,
or None if the value failed typing.
(when one controls the class being type-inferred, an alternative
to ``typeof_impl`` is to define a ``_numba_type_`` property on the class)
In the rarer case where the new type can denote various Python classes
that are impossible to enumerate, one must insert a manual check in the
fallback implementation of the ``typeof_impl`` generic function.
Proposed changes
''''''''''''''''
Allow people to define a generic hook without monkeypatching the
fallback implementation.
Fast path for type inference on function arguments
--------------------------------------------------
Optionally, one may want to allow a new type to participate in the
fast type resolution (written in C code) to minimize function call
overhead when a JIT-compiled function is called with the new type.
One must then insert the required checks and implementation in
the ``_typeof.c`` file, presumably inside the ``compute_fingerprint()``
function.
Proposed changes
''''''''''''''''
None. Adding generic hooks to C code embedded in a C Python extension
is too delicate a change.
Type inference on operations
----------------------------
Values resulting from various operations (function calls, operators, etc.)
are typed using a set of helpers called "templates". One can define a
new template by subclassing one of the existing base classes and implementing
the desired inference mechanism. The template is explicitly registered
with the type inference machinery using a decorator.
The :class:`ConcreteTemplate` base class allows one to define inference as
a set of supported signatures for a given operation. The following example
types the modulo operator::
@builtin
class BinOpMod(ConcreteTemplate):
key = "%"
cases = [signature(op, op, op)
for op in sorted(types.signed_domain)]
cases += [signature(op, op, op)
for op in sorted(types.unsigned_domain)]
cases += [signature(op, op, op) for op in sorted(types.real_domain)]
(note that type *instances* are used in the signatures, severely
limiting the amount of genericity that can be expressed)
The :class:`AbstractTemplate` base class allows one to define inference
programmatically, giving it full flexibility. Here is a simplistic
example of how tuple indexing (i.e. the ``__getitem__`` operator) can
be expressed::
@builtin
class GetItemUniTuple(AbstractTemplate):
key = "getitem"
def generic(self, args, kws):
tup, idx = args
if isinstance(tup, types.UniTuple) and isinstance(idx, types.Integer):
return signature(tup.dtype, tup, idx)
The :class:`AttributeTemplate` base class allows one to type the attributes
and methods of a given type. Here is an example, typing the ``.real``
and ``.imag`` attributes of complex numbers::
@builtin_attr
class ComplexAttribute(AttributeTemplate):
key = types.Complex
def resolve_real(self, ty):
return ty.underlying_float
def resolve_imag(self, ty):
return ty.underlying_float
.. note::
:class:`AttributeTemplate` only works for getting attributes. Setting
an attribute's value is hardcoded in :mod:`numba.typeinfer`.
The :class:`CallableTemplate` base class offers an easier way to parse
flexible function signatures, by letting one define a callable that has
the same definition as the function being typed. For example, here is how
one could hypothetically type Python's ``sorted`` function if Numba supported
lists::
@builtin
class Sorted(CallableTemplate):
key = sorted
def generic(self):
def typer(iterable, key=None, reverse=None):
if reverse is not None and not isinstance(reverse, types.Boolean):
return
if key is not None and not isinstance(key, types.Callable):
return
if not isinstance(iterable, types.Iterable):
return
return types.List(iterable.iterator_type.yield_type)
return typer
(note you can return just the function's return type instead of the
full signature)
Proposed changes
''''''''''''''''
Naming of the various decorators is quite vague and confusing. We propose
renaming ``@builtin`` to ``@infer``, ``@builtin_attr`` to ``@infer_getattr``
and ``builtin_global`` to ``infer_global``.
The two-step declaration for global values is a bit verbose; we propose
simplifying it by allowing the use of ``infer_global`` as a decorator::
@infer_global(len)
class Len(AbstractTemplate):
key = len
def generic(self, args, kws):
assert not kws
(val,) = args
if isinstance(val, (types.Buffer, types.BaseTuple)):
return signature(types.intp, val)
The class-based API can feel clumsy; we can add a functional API for
some of the template kinds:
.. code-block:: python
@type_callable(sorted)
def type_sorted(context):
def typer(iterable, key=None, reverse=None):
# [same function as above]
return typer
Code generation
===============
Concrete representation of values of a Numba type
-------------------------------------------------
Any concrete Numba type must be able to be represented in LLVM form
(for variable storage, argument passing, etc.). One defines that
representation by implementing a datamodel class and registering it
with a decorator. Datamodel classes for standard types are defined
in :mod:`numba.datamodel.models`.
Proposed changes
''''''''''''''''
No change required.
Conversion between types
------------------------
Implicit conversion between Numba types is currently implemented as a
monolithic sequence of choices and type checks in the
:meth:`BaseContext.cast` method. To add a new implicit conversion, one
appends a type-specific check in that method.
Boolean evaluation is a special case of implicit conversion (the
destination type being :class:`types.Boolean`).
.. note::
Explicit conversion is seen as a regular operation, e.g. a constructor
call.
Proposed changes
''''''''''''''''
Add a generic function for implicit conversion, with multiple dispatch
based on the source and destination types. Here is an example showing
how to write a float-to-integer conversion::
@lower_cast(types.Float, types.Integer)
def float_to_integer(context, builder, fromty, toty, val):
lty = context.get_value_type(toty)
if toty.signed:
return builder.fptosi(val, lty)
else:
return builder.fptoui(val, lty)
Implementation of an operation
------------------------------
Other operations are implemented and registered using a set of generic
functions and decorators. For example, here is how lookup of the ``.ndim``
attribute on Numpy arrays is implemented::
@builtin_attr
@impl_attribute(types.Kind(types.Array), "ndim", types.intp)
def array_ndim(context, builder, typ, value):
return context.get_constant(types.intp, typ.ndim)
And here is how calling ``len()`` on a tuple value is implemented::
@builtin
@implement(types.len_type, types.Kind(types.BaseTuple))
def tuple_len(context, builder, sig, args):
tupty, = sig.args
retty = sig.return_type
return context.get_constant(retty, len(tupty.types))
Proposed changes
''''''''''''''''
Review and streamline the API. Drop the requirement to write
``types.Kind(...)`` explicitly. Remove the separate ``@implement``
decorator and rename ``@builtin`` to ``@lower_builtin``, ``@builtin_attr``
to ``@lower_getattr``, etc.
Add decorators to implement ``setattr()`` operations, named
``@lower_setattr`` and ``@lower_setattr_generic``.
Conversion from / to Python objects
-----------------------------------
Some types need to be converted from or to Python objects, if they can
be passed as function arguments or returned from a function. The
corresponding boxing and unboxing operations are implemented using
a generic function. The implementations for standard Numba types
are in :mod:`numba.targets.boxing`. For example, here is the boxing
implementation for a boolean value::
@box(types.Boolean)
def box_bool(c, typ, val):
longval = c.builder.zext(val, c.pyapi.long)
return c.pyapi.bool_from_long(longval)
Proposed changes
''''''''''''''''
Change the implementation signature from ``(c, typ, val)`` to
``(typ, val, c)``, to match the one chosen for the ``typeof_impl``
generic function.
.. _nbep-7:
===============================================
NBEP 7: CUDA External Memory Management Plugins
===============================================
:Author: Graham Markall, NVIDIA
:Contributors: Thomson Comer, Peter Entschev, Leo Fang, John Kirkham, Keith Kraus
:Date: March 2020
:Status: Final
Background and goals
--------------------
The :ref:`CUDA Array Interface <cuda-array-interface>` enables sharing of data
between different Python libraries that access CUDA devices. However, each
library manages its own memory distinctly from the others. For example:
* `Numba <https://numba.pydata.org/>`_ internally manages memory for the creation
of device and mapped host arrays.
* `The RAPIDS libraries <https://rapids.ai/>`_ (cuDF, cuML, etc.) use the `Rapids
Memory Manager <https://github.com/rapidsai/rmm>`_ for allocating device
memory.
* `CuPy <https://cupy.chainer.org/>`_ includes a `memory pool
implementation <https://docs-cupy.chainer.org/en/stable/reference/memory.html>`_
for both device and pinned memory.
The goal of this NBEP is to describe a plugin interface that enables Numba's
internal memory management to be replaced with an external memory manager by the
user. When the plugin interface is in use, Numba no longer directly allocates or
frees any memory when creating arrays, but instead requests allocations and
frees through the external manager.
Requirements
------------
Provide an *External Memory Manager (EMM)* interface in Numba.
* When the EMM is in use, Numba will make all memory allocations using the EMM.
  It will never directly call functions such as ``cuMemAlloc``\ , ``cuMemFree``\ , etc.
* When not using an *External Memory Manager (EMM)*\ , Numba's present behaviour
is unchanged (at the time of writing, the current version is the 0.48
release).
If an EMM is to be used, it will entirely replace Numba's internal memory
management for the duration of program execution. An interface for setting the
memory manager will be provided.
Device vs. Host memory
^^^^^^^^^^^^^^^^^^^^^^^
An EMM will always take responsibility for the management of device memory.
However, not all CUDA memory management libraries also support managing host
memory, so a facility for Numba to continue the management of host memory
whilst ceding control of device memory to the EMM will be provided.
Deallocation strategies
^^^^^^^^^^^^^^^^^^^^^^^
Numba's internal memory management uses a :ref:`deallocation strategy
<deallocation-behavior>` designed to increase efficiency by deferring
deallocations until a significant quantity are pending. It also provides a
mechanism for preventing deallocations entirely during critical sections, using
the :func:`~numba.cuda.defer_cleanup` context manager.
* When the EMM is not in use, the deallocation strategy and operation of
``defer_cleanup`` remain unchanged.
* When the EMM is in use, the deallocation strategy is implemented by the EMM,
and Numba's internal deallocation mechanism is not used. For example:
* A similar strategy to Numba's could be implemented by the EMM, or
* Deallocated memory might immediately be returned to a memory pool.
* The ``defer_cleanup`` context manager may behave differently with an EMM - an
EMM should be accompanied by documentation of the behaviour of the
``defer_cleanup`` context manager when it is in use.
* For example, a pool allocator could always immediately return memory to a
pool even when the context manager is in use, but could choose
not to free empty pools until ``defer_cleanup`` is not in use.
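The pool-allocator policy described above can be sketched with a toy pool (hypothetical, not any particular library's implementation): freed blocks always return to the pool immediately, but the pool itself is only trimmed when no ``defer_cleanup`` context is active.

```python
from contextlib import contextmanager


class SimplePool:
    """Toy pool illustrating one possible defer_cleanup policy."""

    def __init__(self):
        self.free_blocks = []
        self._defer_depth = 0

    def free(self, block):
        # Memory always goes straight back to the pool...
        self.free_blocks.append(block)
        self._maybe_trim()

    def _maybe_trim(self):
        # ...but blocks are only released back to the driver when no
        # defer_cleanup context is active.
        if self._defer_depth == 0:
            self.free_blocks.clear()

    @contextmanager
    def defer_cleanup(self):
        self._defer_depth += 1
        try:
            yield
        finally:
            self._defer_depth -= 1
            self._maybe_trim()


pool = SimplePool()
with pool.defer_cleanup():
    pool.free("block0")
    assert pool.free_blocks == ["block0"]  # retained while deferred
assert pool.free_blocks == []              # trimmed on exit
```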
Management of other objects
^^^^^^^^^^^^^^^^^^^^^^^^^^^
In addition to memory, Numba manages the allocation and deallocation of
:ref:`events <events>`, :ref:`streams <streams>`, and modules (a module is a
compiled object, which is generated from ``@cuda.jit``\ -ted functions). The
management of streams, events, and modules should be unchanged by the presence
or absence of an EMM.
Asynchronous allocation / deallocation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
An asynchronous memory manager might provide the facility for an allocation or
free to take a CUDA stream and execute asynchronously. For freeing, this is
unlikely to cause issues since it operates at a layer beneath Python, but for
allocations this could be problematic if the user tries to then launch a kernel
on the default stream from this asynchronous memory allocation.
The interface described in this proposal will not be required to support
asynchronous allocation and deallocation, and as such these use cases will not
be considered further. However, nothing in this proposal should preclude the
straightforward addition of asynchronous operations in future versions of the
interface.
Non-requirements
^^^^^^^^^^^^^^^^
In order to minimise complexity and constrain this proposal to a reasonable
scope, the following will not be supported:
* Using different memory manager implementations for different contexts. All
contexts will use the same memory manager implementation - either the Numba
internal implementation or an external implementation.
* Changing the memory manager once execution has begun. It is not practical to
change the memory manager and retain all allocations. Cleaning up the entire
state and then changing to a different memory allocator (rather than starting
a new process) appears to be a rather niche use case.
* Any changes to the ``__cuda_array_interface__`` to further define its semantics,
e.g. for acquiring / releasing memory as discussed in `Numba Issue
#4886 <https://github.com/numba/numba/issues/4886>`_ - these are independent,
and can be addressed as part of separate proposals.
* Managed memory / UVM is not supported. At present Numba does not support UVM -
see `Numba Issue #4362 <https://github.com/numba/numba/issues/4362>`_ for
discussion of support.
Interface for Plugin developers
-------------------------------
New classes and functions will be added to ``numba.cuda.cudadrv.driver``:
* ``BaseCUDAMemoryManager`` and ``HostOnlyCUDAMemoryManager``\ : base classes for
EMM plugin implementations.
* ``set_memory_manager``: a function for registering an external memory manager with
  Numba.
These will be exposed through the public API, in the ``numba.cuda`` module.
Additionally, some classes that are already part of the ``driver`` module will be
exposed as part of the public API:
* ``MemoryPointer``: used to encapsulate information about a pointer to device
memory.
* ``MappedMemory``: used to hold information about host memory that is mapped into
the device address space (a subclass of ``MemoryPointer``\ ).
* ``PinnedMemory``: used to hold information about host memory that is pinned (a
subclass of ``mviewbuf.MemAlloc``\ , a class internal to Numba).
As an alternative to calling the ``set_memory_manager`` function, an environment
variable can be used to set the memory manager. The value of the environment
variable should be the name of the module containing the memory manager in its
global scope, named ``_numba_memory_manager``\ :
.. code-block::
export NUMBA_CUDA_MEMORY_MANAGER="<module>"
When this variable is set, Numba will automatically use the memory manager from
the specified module. Calls to ``set_memory_manager`` will issue a warning, but
otherwise be ignored.
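The loading mechanism can be illustrated in isolation with the standard library: the module named by the environment variable is imported and the ``_numba_memory_manager`` attribute is looked up in its global scope. This is a minimal sketch using a synthetic module, not Numba's actual implementation; ``mylib`` and ``MyMemoryManager`` are hypothetical names.

```python
import importlib
import os
import sys
import types


# Hypothetical stand-in for an EMM plugin class; in practice this would be
# a subclass of BaseCUDAMemoryManager provided by the external library.
class MyMemoryManager:
    pass


# Simulate a library module that exposes its manager in its global scope
# under the required name:
mod = types.ModuleType("mylib")
mod._numba_memory_manager = MyMemoryManager
sys.modules["mylib"] = mod

# Numba would then resolve the manager roughly like this:
os.environ["NUMBA_CUDA_MEMORY_MANAGER"] = "mylib"
module = importlib.import_module(os.environ["NUMBA_CUDA_MEMORY_MANAGER"])
manager_class = module._numba_memory_manager
assert manager_class is MyMemoryManager
```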
Plugin Base Classes
^^^^^^^^^^^^^^^^^^^
An EMM plugin is implemented by inheriting from the ``BaseCUDAMemoryManager``
class, which is defined as:
.. code-block:: python
class BaseCUDAMemoryManager(object, metaclass=ABCMeta):
@abstractmethod
def memalloc(self, size):
"""
Allocate on-device memory in the current context. Arguments:
- `size`: Size of allocation in bytes
Returns: a `MemoryPointer` to the allocated memory.
"""
@abstractmethod
def memhostalloc(self, size, mapped, portable, wc):
"""
Allocate pinned host memory. Arguments:
- `size`: Size of the allocation in bytes
- `mapped`: Whether the allocated memory should be mapped into the CUDA
address space.
- `portable`: Whether the memory will be considered pinned by all
contexts, and not just the calling context.
- `wc`: Whether to allocate the memory as write-combined.
Returns a `MappedMemory` or `PinnedMemory` instance that owns the
allocated memory, depending on whether the region was mapped into
device memory.
"""
@abstractmethod
def mempin(self, owner, pointer, size, mapped):
"""
Pin a region of host memory that is already allocated. Arguments:
- `owner`: An object owning the memory - e.g. a `DeviceNDArray`.
- `pointer`: The pointer to the beginning of the region to pin.
- `size`: The size of the region to pin.
- `mapped`: Whether the region should also be mapped into device memory.
Returns a `MappedMemory` or `PinnedMemory` instance that refers to the
allocated memory, depending on whether the region was mapped into device
memory.
"""
@abstractmethod
def initialize(self):
"""
Perform any initialization required for the EMM plugin to be ready to
use.
"""
@abstractmethod
def get_memory_info(self):
"""
Returns (free, total) memory in bytes in the context
"""
@abstractmethod
def get_ipc_handle(self, memory):
"""
Return an `IpcHandle` from a GPU allocation. Arguments:
- `memory`: A `MemoryPointer` for which the IPC handle should be created.
"""
@abstractmethod
def reset(self):
"""
Clear up all memory allocated in this context.
"""
@abstractmethod
def defer_cleanup(self):
"""
Returns a context manager that ensures the implementation of deferred
cleanup whilst it is active.
"""
@property
@abstractmethod
def interface_version(self):
"""
Returns an integer specifying the version of the EMM Plugin interface
supported by the plugin implementation. Should always return 1 for
implementations described in this proposal.
"""
All of the methods of an EMM plugin are called from within Numba - they never
need to be invoked directly by a Numba user.
The ``initialize`` method is called by Numba prior to any memory allocations
being requested. This gives the EMM an opportunity to initialize any data
structures, etc., that it needs for its normal operations. The method may be
called multiple times during the lifetime of the program - subsequent calls
should not invalidate or reset the state of the EMM.
The ``memalloc``\ , ``memhostalloc``\ , and ``mempin`` methods are called when Numba
requires an allocation of device or host memory, or pinning of host memory.
Device memory should always be allocated in the current context.
``get_ipc_handle`` is called when an IPC handle for an array is required. Note
that there is no method for closing an IPC handle - this is because the
``IpcHandle`` object constructed by ``get_ipc_handle`` contains a ``close()`` method
as part of its definition in Numba, which closes the handle by calling
``cuIpcCloseMemHandle``. It is expected that this is sufficient for general use
cases, so no facility for customising the closing of IPC handles is provided by
the EMM Plugin interface.
``get_memory_info`` may be called at any time after ``initialize``.
``reset`` is called as part of resetting a context. Numba does not normally call
reset spontaneously, but it may be called at the behest of the user. Calls to
``reset`` may even occur before ``initialize`` is called, so the plugin should be
robust against this occurrence.
``defer_cleanup`` is called when the ``numba.cuda.defer_cleanup`` context manager
is used from user code.
``interface_version`` is called by Numba when the memory manager is set, to
ensure that the version of the interface implemented by the plugin is
compatible with the version of Numba in use.
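Putting the requirements above together, a plugin skeleton might look like the following sketch. To keep it self-contained, a stand-in abstract base mirroring only part of the interface is defined inline; a real plugin would subclass ``numba.cuda.BaseCUDAMemoryManager`` and implement every method listed above.

```python
from abc import ABC, abstractmethod


# Stand-in mirroring three of the BaseCUDAMemoryManager abstract methods,
# so the sketch is runnable without Numba installed.
class BaseCUDAMemoryManager(ABC):
    @abstractmethod
    def initialize(self): ...

    @abstractmethod
    def memalloc(self, size): ...

    @property
    @abstractmethod
    def interface_version(self): ...


class MinimalEMMPlugin(BaseCUDAMemoryManager):
    def __init__(self):
        self._allocations = None

    def initialize(self):
        # May be called multiple times during the program's lifetime;
        # subsequent calls must not reset state, so only build the
        # bookkeeping structure on the first call.
        if self._allocations is None:
            self._allocations = {}

    def memalloc(self, size):
        # A real implementation would allocate device memory via the
        # external library and wrap it in a MemoryPointer.
        raise NotImplementedError("device allocation not sketched here")

    @property
    def interface_version(self):
        # Always 1 for implementations described in this proposal.
        return 1


plugin = MinimalEMMPlugin()
plugin.initialize()
plugin.initialize()  # safe: does not reset state
```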
Representing pointers
^^^^^^^^^^^^^^^^^^^^^
Device Memory
~~~~~~~~~~~~~
The ``MemoryPointer`` class is used to represent a pointer to memory. Whilst there
are various details of its implementation, the only aspect relevant to EMM
plugin development is its initialization. The ``__init__`` method has the
following interface:
.. code-block:: python
class MemoryPointer:
def __init__(self, context, pointer, size, owner=None, finalizer=None):
* ``context``\ : The context in which the pointer was allocated.
* ``pointer``\ : A ``ctypes`` pointer (e.g. ``ctypes.c_uint64``\ ) holding the address of
the memory.
* ``size``\ : The size of the allocation in bytes.
* ``owner``\ : The owner is sometimes set by the internals of the class, or used for
Numba's internal memory management, but need not be provided by the writer of
an EMM plugin - the default of ``None`` should always suffice.
* ``finalizer``\ : A method that is called when the last reference to the
``MemoryPointer`` object is released. Usually this will make a call to the
external memory management library to inform it that the memory is no longer
required, and that it could potentially be freed (though the EMM is not
required to free it immediately).
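The finalizer contract - a callback that runs when the last reference to the pointer object is released - can be illustrated in plain Python. The ``ToyPointer`` class below is a hypothetical stand-in for ``MemoryPointer``, using ``weakref.finalize`` as one way to arrange a last-reference callback; it is not Numba's actual mechanism.

```python
import weakref

freed = []


def make_finalizer(address):
    # Closes over the address; a real finalizer would notify the external
    # memory management library that this allocation is no longer needed.
    def finalizer():
        freed.append(address)
    return finalizer


class ToyPointer:
    """Stand-in for MemoryPointer, keeping only the finalizer behaviour."""

    def __init__(self, pointer, size, finalizer=None):
        self.pointer = pointer
        self.size = size
        if finalizer is not None:
            # Arrange for `finalizer` to run when the last reference to
            # this object is released.
            weakref.finalize(self, finalizer)


ptr = ToyPointer(pointer=0x7F0000000000, size=1024,
                 finalizer=make_finalizer(0x7F0000000000))
assert freed == []   # still referenced, finalizer has not run
del ptr              # last reference released, finalizer runs
assert freed == [0x7F0000000000]
```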
Host Memory
~~~~~~~~~~~
Memory mapped into the CUDA address space (which is created when the
``memhostalloc`` or ``mempin`` methods are called with ``mapped=True``\ ) is managed
using the ``MappedMemory`` class:
.. code-block:: python
class MappedMemory(AutoFreePointer):
def __init__(self, context, pointer, size, owner, finalizer=None):
* ``context``\ : The context in which the pointer was allocated.
* ``pointer``\ : A ``ctypes`` pointer (e.g. ``ctypes.c_void_p``\ ) holding the address of
the allocated memory.
* ``size``\ : The size of the allocated memory in bytes.
* ``owner``\ : A Python object that owns the memory, e.g. a ``DeviceNDArray``
instance.
* ``finalizer``\ : A method that is called when the last reference to the
``MappedMemory`` object is released. For example, this method could call
``cuMemFreeHost`` on the pointer to deallocate the memory immediately.
Note that the inheritance from ``AutoFreePointer`` is an implementation detail and
need not concern the developer of an EMM plugin - ``MemoryPointer`` is higher in
the MRO of ``MappedMemory``.
Memory that is only in the host address space and has been pinned is represented
with the ``PinnedMemory`` class:
.. code-block:: python
class PinnedMemory(mviewbuf.MemAlloc):
def __init__(self, context, pointer, size, owner, finalizer=None):
* ``context``\ : The context in which the pointer was allocated.
* ``pointer``\ : A ``ctypes`` pointer (e.g. ``ctypes.c_void_p``\ ) holding the address of
the pinned memory.
* ``size``\ : The size of the pinned region in bytes.
* ``owner``\ : A Python object that owns the memory, e.g. a ``DeviceNDArray``
instance.
* ``finalizer``\ : A method that is called when the last reference to the
``PinnedMemory`` object is released. This method could e.g. call
``cuMemHostUnregister`` on the pointer to unpin the memory immediately.
Providing device memory management only
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Some external memory managers will support management of on-device memory but
not host memory. To make it easy to implement an EMM plugin using one of these
managers, Numba will provide a memory manager class with implementations of the
``memhostalloc`` and ``mempin`` methods. An abridged definition of this class
follows:
.. code-block:: python
class HostOnlyCUDAMemoryManager(BaseCUDAMemoryManager):
# Unimplemented methods:
#
# - memalloc
# - get_memory_info
def memhostalloc(self, size, mapped, portable, wc):
# Implemented.
def mempin(self, owner, pointer, size, mapped):
# Implemented.
def initialize(self):
# Implemented.
#
# Must be called by any subclass when its initialize() method is
# called.
def reset(self):
# Implemented.
#
# Must be called by any subclass when its reset() method is
# called.
def defer_cleanup(self):
# Implemented.
#
# Must be called by any subclass when its defer_cleanup() method is
# called.
A class that subclasses ``HostOnlyCUDAMemoryManager`` only needs to add
implementations of the methods for on-device memory. Any subclass must observe
the following rules:
* If the subclass implements ``__init__``\ , then it must also call
``HostOnlyCUDAMemoryManager.__init__``\ , as this is used to initialize some of
its data structures (\ ``self.allocations`` and ``self.deallocations``\ ).
* The subclass must implement ``memalloc`` and ``get_memory_info``.
* The ``initialize`` and ``reset`` methods perform initialisation of structures
used by the ``HostOnlyCUDAMemoryManager``.
* If the subclass has nothing to do on initialisation (possibly) or reset
(unlikely) then it need not implement these methods.
* However, if it does implement these methods then it must also call the
methods from ``HostOnlyCUDAMemoryManager`` in its own implementations.
* Similarly if ``defer_cleanup`` is implemented, it should enter the context
provided by ``HostOnlyCUDAMemoryManager.defer_cleanup()`` prior to ``yield``\ ing (or in
the ``__enter__`` method) and release it prior to exiting (or in the ``__exit__``
method).
Import order
^^^^^^^^^^^^
The order in which Numba and the library implementing an EMM Plugin are imported
should not matter. For example, if ``rmm`` were to implement and register an EMM Plugin,
then:
.. code-block:: python
from numba import cuda
import rmm
and
.. code-block:: python
import rmm
from numba import cuda
are equivalent - this is because Numba does not initialize CUDA or allocate any
memory until the first call to a CUDA function - neither instantiating and
registering an EMM plugin, nor importing ``numba.cuda`` causes a call to a CUDA
function.
Numba as a Dependency
^^^^^^^^^^^^^^^^^^^^^
Adding the implementation of an EMM Plugin to a library naturally makes Numba a
dependency of the library where it may not have been previously. In order to
make the dependency optional, if this is desired, one might conditionally
instantiate and register the EMM Plugin like:
.. code-block:: python
try:
import numba
from mylib.numba_utils import MyNumbaMemoryManager
numba.cuda.cudadrv.driver.set_memory_manager(MyNumbaMemoryManager)
except ImportError:
print("Numba not importable - not registering EMM Plugin")
so that ``mylib.numba_utils``\ , which contains the implementation of the EMM
Plugin, is only imported if Numba is already present. If Numba is not available,
then ``mylib.numba_utils`` (which necessarily imports ``numba``\ ), will never be
imported.
It is recommended that any library with an EMM Plugin includes at least some
environments with Numba for testing with the EMM Plugin in use, as well as some
environments without Numba, to avoid introducing an accidental Numba dependency.
Example implementation - A RAPIDS Memory Manager (RMM) Plugin
-------------------------------------------------------------
An implementation of an EMM plugin within the `Rapids Memory Manager
(RMM) <https://github.com/rapidsai/rmm>`_ is sketched out in this section. This is
intended to show an overview of the implementation in order to support the
descriptions above and to illustrate how the plugin interface can be used -
different choices may be made for a production-ready implementation.
The plugin implementation consists of additions to `python/rmm/rmm.py
<https://github.com/rapidsai/rmm/blob/d5831ac5ebb5408ee83f63b7c7d03d8870ecb361/python/rmm/rmm.py>`_:
.. code-block:: python
# New imports:
from contextlib import contextmanager
# RMM already has Numba as a dependency, so these imports need not be guarded
# by a check for the presence of numba.
from numba.cuda import (HostOnlyCUDAMemoryManager, MemoryPointer, IpcHandle,
set_memory_manager)
# New class implementing the EMM Plugin:
class RMMNumbaManager(HostOnlyCUDAMemoryManager):
def memalloc(self, size):
# Allocates device memory using RMM functions. The finalizer for the
# allocated memory calls back to RMM to free the memory.
addr = librmm.rmm_alloc(size, 0)
ctx = cuda.current_context()
ptr = ctypes.c_uint64(int(addr))
finalizer = _make_finalizer(addr, 0)  # 0: the default stream
return MemoryPointer(ctx, ptr, size, finalizer=finalizer)
def get_ipc_handle(self, memory):
"""
Get an IPC handle for the memory with offset modified by the RMM memory
pool.
"""
# This implementation provides a functional implementation and illustrates
# what get_ipc_handle needs to do, but it is not a very "clean"
# implementation, and it relies on borrowing bits of Numba internals to
# initialise ipchandle.
#
# A more polished implementation might make use of additional functions in
# the RMM C++ layer for initialising IPC handles, and not use any Numba
# internals.
ipchandle = (ctypes.c_byte * 64)() # IPC handle is 64 bytes
cuda.cudadrv.memory.driver_funcs.cuIpcGetMemHandle(
ctypes.byref(ipchandle),
memory.owner.handle,
)
source_info = cuda.current_context().device.get_device_identity()
ptr = memory.device_ctypes_pointer.value
offset = librmm.rmm_getallocationoffset(ptr, 0)
return IpcHandle(memory, ipchandle, memory.size, source_info,
offset=offset)
def get_memory_info(self):
# Returns a tuple of (free, total) using RMM functionality.
return get_info() # Function defined in rmm.py
def initialize(self):
# Nothing required to initialize RMM here, but this method is added
# to illustrate that the super() method should also be called.
super().initialize()
@contextmanager
def defer_cleanup(self):
# Does nothing to defer cleanup - a full implementation may choose to
# implement a different policy.
with super().defer_cleanup():
yield
@property
def interface_version(self):
# As required by the specification
return 1
# The existing _make_finalizer function is used by RMMNumbaManager:
def _make_finalizer(handle, stream):
"""
Factory to make the finalizer function.
We need to bind *handle* and *stream* into the actual finalizer, which
takes no args.
"""
def finalizer():
"""
Invoked when the MemoryPointer is freed
"""
librmm.rmm_free(handle, stream)
return finalizer
# Utility function to register `RMMNumbaManager` as an EMM:
def use_rmm_for_numba():
set_memory_manager(RMMNumbaManager)
# To support `NUMBA_CUDA_MEMORY_MANAGER=rmm`:
_numba_memory_manager = RMMNumbaManager
Example usage
^^^^^^^^^^^^^
A simple example that configures Numba to use RMM for memory management and
creates a device array is as follows:
.. code-block:: python
# example.py
import rmm
import numpy as np
from numba import cuda
rmm.use_rmm_for_numba()
a = np.zeros(10)
d_a = cuda.to_device(a)
del d_a
print(rmm.csv_log())
Running this should result in output similar to the following:
.. code-block::
Event Type,Device ID,Address,Stream,Size (bytes),Free Memory,Total Memory,Current Allocs,Start,End,Elapsed,Location
Alloc,0,0x7fae06600000,0,80,0,0,1,1.10549,1.1074,0.00191666,<path>/numba/numba/cuda/cudadrv/driver.py:683
Free,0,0x7fae06600000,0,0,0,0,0,1.10798,1.10921,0.00122238,<path>/numba/numba/utils.py:678
Note that there is some scope for improvement in RMM for detecting the line
number at which the allocation / free occurred, but this is outside the scope of
the example in this proposal.
Setting the memory manager through the environment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Rather than calling ``rmm.use_rmm_for_numba()`` in the example above, the memory
manager could also be set to use RMM globally with an environment variable, so
the Python interpreter is invoked to run the example as:
.. code-block::
NUMBA_CUDA_MEMORY_MANAGER="rmm.RMMNumbaManager" python example.py
Numba internal changes
----------------------
This section is intended primarily for Numba developers - those with an interest
in the external interface for implementing EMM plugins may choose to skip over
this section.
Current model / implementation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
At present, memory management is implemented in the
:class:`~numba.cuda.cudadrv.driver.Context` class. It maintains lists of
allocations and deallocations:
* ``allocations`` is a ``numba.core.utils.UniqueDict``, created at context
creation time.
* ``deallocations`` is an instance of the ``_PendingDeallocs`` class, and is created
when ``Context.prepare_for_use()`` is called.
These are used to track allocations and deallocations of:
* Device memory
* Pinned memory
* Mapped memory
* Streams
* Events
* Modules
The ``_PendingDeallocs`` class implements the deferred deallocation strategy -
cleanup functions (such as ``cuMemFree``\ ) for the items above are added to its
list of pending deallocations by the finalizers of objects representing
allocations. These finalizers are run when the objects owning them are
garbage-collected by the Python interpreter. When the addition of a new
cleanup function to the deallocation list causes the number or size of pending
deallocations to exceed a configured ratio, the ``_PendingDeallocs`` object runs
deallocators for all items it knows about and then clears its internal pending
list.
See :ref:`deallocation-behavior` for more details of this implementation.
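The deferred strategy can be sketched with a toy pending-deallocations list: cleanup callables accumulate, and all of them run once a threshold is exceeded. This is a hypothetical simplification of ``_PendingDeallocs`` (which also tracks allocation sizes against a configured ratio), not the actual class.

```python
class PendingDeallocs:
    """Toy model of the deferred-deallocation strategy."""

    def __init__(self, max_pending=10):
        self.max_pending = max_pending
        self._pending = []

    def add_item(self, dtor, handle):
        # Finalizers call this instead of freeing immediately.
        self._pending.append((dtor, handle))
        if len(self._pending) > self.max_pending:
            self.clear()

    def clear(self):
        # Run every pending deallocator, then empty the list.
        for dtor, handle in self._pending:
            dtor(handle)
        self._pending.clear()


released = []
deallocs = PendingDeallocs(max_pending=2)
for handle in range(3):
    deallocs.add_item(released.append, handle)
# The third addition exceeded the threshold, flushing all three:
assert released == [0, 1, 2]
```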
Proposed changes
^^^^^^^^^^^^^^^^
This section outlines the major changes that will be made to support the EMM
plugin interface - there will be various small changes to other parts of Numba
that will be required in order to adapt to these changes; an exhaustive list of
these is not provided.
Context changes
~~~~~~~~~~~~~~~
The ``numba.cuda.cudadrv.driver.Context`` class will no longer directly allocate
and free memory. Instead, the context will hold a reference to a memory manager
instance, and its memory allocation methods will call into the memory manager,
e.g.:
.. code-block:: python
def memalloc(self, size):
return self.memory_manager.memalloc(size)
def memhostalloc(self, size, mapped=False, portable=False, wc=False):
return self.memory_manager.memhostalloc(size, mapped, portable, wc)
def mempin(self, owner, pointer, size, mapped=False):
if mapped and not self.device.CAN_MAP_HOST_MEMORY:
raise CudaDriverError("%s cannot map host memory" % self.device)
return self.memory_manager.mempin(owner, pointer, size, mapped)
def prepare_for_use(self):
self.memory_manager.initialize()
def get_memory_info(self):
return self.memory_manager.get_memory_info()
def get_ipc_handle(self, memory):
return self.memory_manager.get_ipc_handle(memory)
def reset(self):
# ... Already-extant reset logic, plus:
self.memory_manager.reset()
The ``memory_manager`` member is initialised when the context is created.
The ``memunpin`` method (which exists in the ``Context`` class but is not shown
above) has never been implemented - it presently raises a ``NotImplementedError``.
This method is arguably unneeded - pinned memory is immediately unpinned by its
finalizer, and unpinning before a finalizer runs would invalidate the state of
``PinnedMemory`` objects for which references are still held. It is proposed that
this is removed when making the other changes to the ``Context`` class.
The ``Context`` class will still instantiate ``self.allocations`` and
``self.deallocations`` as before - these will still be used by the context to
manage the allocations and deallocations of events, streams, and modules, which
are not handled by the EMM plugin.
New components of the ``driver`` module
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* ``BaseCUDAMemoryManager``\ : An abstract class, as defined in the plugin interface
above.
* ``HostOnlyCUDAMemoryManager``\ : A subclass of ``BaseCUDAMemoryManager``\ , with the
logic from ``Context.memhostalloc`` and ``Context.mempin`` moved into it. This
class will also create its own ``allocations`` and ``deallocations`` members,
similarly to how the ``Context`` class creates them. These are used to manage
the allocations and deallocations of pinned and mapped host memory.
* ``NumbaCUDAMemoryManager``\ : A subclass of ``HostOnlyCUDAMemoryManager``\ , which
also contains an implementation of ``memalloc`` based on that presently existing
in the ``Context`` class. This is the default memory manager, and its use
preserves the behaviour of Numba prior to the addition of the EMM plugin
interface - that is, all memory allocation and deallocation for Numba arrays
is handled within Numba.
* This class shares the ``allocations`` and ``deallocations`` members with its
parent class ``HostOnlyCUDAMemoryManager``\ , and it uses these for the
management of device memory that it allocates.
* The ``set_memory_manager`` function, which sets a global pointing to the memory
manager class. This global initially holds ``NumbaCUDAMemoryManager`` (the
default).
Staged IPC
~~~~~~~~~~
Staged IPC should not take ownership of the memory that it allocates. When the
default internal memory manager is in use, the memory allocated for the staging
array is already owned. When an EMM plugin is in use, it is not legitimate to
take ownership of the memory.
This change can be made by applying the following small patch, which has been
tested to have no effect on the CUDA test suite:
.. code-block:: diff
diff --git a/numba/cuda/cudadrv/driver.py b/numba/cuda/cudadrv/driver.py
index 7832955..f2c1352 100644
--- a/numba/cuda/cudadrv/driver.py
+++ b/numba/cuda/cudadrv/driver.py
@@ -922,7 +922,11 @@ class _StagedIpcImpl(object):
with cuda.gpus[srcdev.id]:
impl.close()
- return newmem.own()
+ return newmem
Testing
~~~~~~~
Alongside the addition of appropriate tests for new functionality, there will be
some refactoring of existing tests required, but these changes are not
substantial. Tests of the deallocation strategy (e.g. ``TestDeallocation``\ ,
``TestDeferCleanup``\ ) will need to be modified to ensure that they are
examining the correct set of deallocations. When an EMM plugin is in use, they
will need to be skipped.
Prototyping / experimental implementation
-----------------------------------------
Some prototype / experimental implementations have been produced to guide the
designs presented in this document. The current implementations can be found in:
* Numba branch: https://github.com/gmarkall/numba/tree/grm-numba-nbep-7.
* RMM branch: https://github.com/gmarkall/rmm/tree/grm-numba-nbep-7.
* CuPy implementation:
https://github.com/gmarkall/nbep-7/blob/master/nbep7/cupy_mempool.py - uses
an unmodified CuPy.
* See `CuPy memory management
docs <https://docs-cupy.chainer.org/en/stable/reference/memory.html>`_.
Current implementation status
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RMM Plugin
~~~~~~~~~~
For a minimal example, a simple allocation and free using RMM works as expected.
For the example code (similar to the RMM example above):
.. code-block:: python
import rmm
import numpy as np
from numba import cuda
rmm.use_rmm_for_numba()
a = np.zeros(10)
d_a = cuda.to_device(a)
del d_a
print(rmm.csv_log())
We see the following output:
.. code-block::
Event Type,Device ID,Address,Stream,Size (bytes),Free Memory,Total Memory,Current Allocs,Start,End,Elapsed,Location
Alloc,0,0x7f96c7400000,0,80,0,0,1,1.13396,1.13576,0.00180059,<path>/numba/numba/cuda/cudadrv/driver.py:686
Free,0,0x7f96c7400000,0,0,0,0,0,1.13628,1.13723,0.000956004,<path>/numba/numba/utils.py:678
This output is similar to the expected output from the example usage presented
above (though note that the pointer addresses and timestamps vary compared to
the example), and provides some validation of the example use case.
CuPy Plugin
~~~~~~~~~~~
.. code-block:: python
from nbep7.cupy_mempool import use_cupy_mm_for_numba
import numpy as np
from numba import cuda
use_cupy_mm_for_numba()
a = np.zeros(10)
d_a = cuda.to_device(a)
del d_a
The prototype CuPy plugin has somewhat primitive logging, so we see the output:
.. code-block::
Allocated 80 bytes at 7f004d400000
Freeing 80 bytes at 7f004d400000
Numba CUDA Unit tests
^^^^^^^^^^^^^^^^^^^^^
As well as providing correct execution of a simple example, all relevant Numba
CUDA unit tests also pass with the prototype branch, for both the internal memory
manager and the RMM EMM Plugin.
RMM
~~~
The unit test suite can be run with the RMM EMM Plugin with:
.. code-block::
NUMBA_CUDA_MEMORY_MANAGER=rmm python -m numba.runtests numba.cuda.tests
A summary of the unit test suite output is:
.. code-block::
Ran 564 tests in 142.211s
OK (skipped=11)
When running with the built-in Numba memory management, the output is:
.. code-block::
Ran 564 tests in 133.396s
OK (skipped=5)
i.e. the changes for using an external memory manager do not break the built-in
Numba memory management. There are an additional 6 skipped tests, from:
* ``TestDeallocation``\ : skipped as it specifically tests Numba's internal
deallocation strategy.
* ``TestDeferCleanup``\ : skipped as it specifically tests Numba's implementation of
deferred cleanup.
* ``TestCudaArrayInterface.test_ownership``\ : skipped as Numba does not own memory
when an EMM Plugin is used, but ownership is assumed by this test case.
CuPy
~~~~
The test suite can be run with the CuPy plugin using:
.. code-block::
NUMBA_CUDA_MEMORY_MANAGER=nbep7.cupy_mempool python -m numba.runtests numba.cuda.tests
This plugin implementation is presently more primitive than the RMM
implementation, and results in some errors with the unit test suite:
.. code-block::
Ran 564 tests in 111.699s
FAILED (errors=8, skipped=11)
The 8 errors are due to a lack of implementation of ``get_ipc_handle`` in the
CuPy EMM Plugin implementation. It is expected that this implementation will be
re-visited and completed so that CuPy can be used stably as an allocator for
Numba in the future.
===========================
Numba Enhancement Proposals
===========================
Numba Enhancement Proposals (not really abbreviated "NEPs", since "NEP"
is already taken by the Numpy project) describe proposed changes to Numba.
They are modeled on Python Enhancement Proposals (PEPs) and Numpy Enhancement
Proposals, and are typically written up when important changes
(behavioural changes, feature additions...) to Numba are proposed.
This page provides an overview of all proposals, making only a distinction
between the ones that have been implemented and those that have not been
implemented.
Implemented proposals
---------------------
.. toctree::
:maxdepth: 1
integer-typing.rst
external-memory-management.rst
Other proposals
---------------
.. toctree::
:maxdepth: 1
extension-points.rst
jit-classes.rst
cfunc.rst
type-inference.rst
typing_recursion.rst
.. _nbep-1:
=================================
NBEP 1: Changes in integer typing
=================================
:Author: Antoine Pitrou
:Date: July 2015
:Status: Final
Current semantics
=================
Type inference of integers in Numba currently has some subtleties
and some corner cases. The simple case is when some variable has an obvious
Numba type (for example because it is the result of a constructor call to a
Numpy scalar type such as ``np.int64``). That case suffers no ambiguity.
The less simple case is when a variable doesn't bear such explicit
information. This can happen because it is inferred from a built-in Python
``int`` value, or from an arithmetic operation between two integers, or
other cases yet. Then Numba has a number of rules to infer the resulting
Numba type, especially its signedness and bitwidth.
Currently, the generic case could be summarized as: *start small,
grow bigger as required*. Concretely:
1. Each constant or pseudo-constant is inferred using the *smallest signed
integer type* that can correctly represent it (or, possibly, ``uint64``
for positive integers between ``2**63`` and ``2**64 - 1``).
2. The result of an operation is typed so as to ensure safe representation
in the face of overflow and other magnitude increases (for example,
``int32 + int32`` would be typed ``int64``).
3. As an exception, a Python ``int`` used as function argument is always
typed ``intp``, a pointer-size integer. This is to avoid the proliferation
of compiled specializations, as otherwise various integer bitwidths
in input arguments may produce multiple signatures.
.. note::
   The second rule above (the "respect magnitude increases" rule)
   reproduces Numpy's behaviour with arithmetic on scalar values.
   Numba, however, has different implementation and performance constraints
   than Numpy scalars.

   It is worth noting, by the way, that Numpy arrays do not implement
   said rule (i.e. ``array(int32) + array(int32)`` is typed ``array(int32)``,
   not ``array(int64)``), probably because this makes performance more
   controllable.
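The array behaviour described in the note can be checked directly with NumPy; this is a quick illustrative snippet, not part of the proposal:

```python
import numpy as np

a = np.ones(3, dtype=np.int32)
b = np.ones(3, dtype=np.int32)

# Arrays do not widen on arithmetic: int32 + int32 stays int32
assert (a + b).dtype == np.dtype(np.int32)
```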
This has several non-obvious side-effects:
1. It is difficult to predict the precise type of a value inside a function,
after several operations. The basic operands in an expression tree
may for example be ``int8`` but the end result may be ``int64``. Whether
this is desirable or not is an open question; it is good for correctness,
but potentially bad for performance.
2. In trying to follow the correctness over predictability rule, some values
can actually leave the integer realm. For example, ``int64 + uint64``
is typed ``float64`` in order to avoid magnitude losses (but incidentally
will lose precision on large integer values...), again following Numpy's
semantics for scalars. This is usually not intended by the user.
3. More complicated scenarios can produce unexpected errors at the type unification
stage. An example is at `Github issue 1299 <https://github.com/numba/numba/issues/1299>`_,
the gist of which is reproduced here::
@jit(nopython=True)
def f():
variable = 0
for i in range(1):
variable = variable + 1
return np.arange(variable)
At the time of this writing, this fails compiling, on a 64-bit system,
with the error::
numba.errors.TypingError: Failed at nopython (nopython frontend)
Can't unify types of variable '$48.4': $48.4 := {array(int32, 1d, C), array(int64, 1d, C)}
Readers familiar with Numba's type unification system can understand why,
but the average user is left mystified.
Proposal: predictable width-conserving typing
=============================================
We propose to turn the current typing philosophy on its head. Instead
of "*start small and grow as required*", we propose "*start big and keep
the width unchanged*".
Concretely:
1. The typing of Python ``int`` values used as function arguments doesn't
   change, as it works satisfactorily and doesn't surprise the user.
2. The typing of integer *constants* (and pseudo-constants) changes to match
the typing of integer arguments. That is, every non-explicitly typed
integer constant is typed ``intp``, the pointer-sized integer; except for
the rare cases where ``int64`` (on 32-bit systems) or ``uint64`` is
required.
3. Operations on integers promote bitwidth to ``intp``, if smaller, otherwise
they don't promote. For example, on a 32-bit machine, ``int8 + int8``
is typed ``int32``, as is ``int32 + int32``. However, ``int64 + int64``
is typed ``int64``.
4. Furthermore, mixed operations between signed and unsigned fall back to
signed, while following the same bitwidth rule. For example, on a
32-bit machine, ``int8 + uint16`` is typed ``int32``, as is
``uint32 + int32``.
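The four rules above can be sketched as a small pure-Python model. This is a hypothetical illustration of the proposed rules, not Numba's actual implementation, and it assumes a 32-bit machine so that ``intp`` is 32 bits:

```python
INTP_WIDTH = 32  # assume a 32-bit machine for this illustration

def promote(a, b):
    """Model of the proposed promotion rules.

    Each operand is a (signedness, bitwidth) pair, e.g. ('int', 8)
    for int8 or ('uint', 32) for uint32.
    """
    # Rule 3: promote bitwidth to intp if smaller, otherwise keep it
    width = max(a[1], b[1], INTP_WIDTH)
    # Rule 4: mixed signedness falls back to signed
    kind = a[0] if a[0] == b[0] else 'int'
    return (kind, width)

# Examples from the proposal text (32-bit machine):
assert promote(('int', 8), ('int', 8)) == ('int', 32)     # int8 + int8 -> int32
assert promote(('int', 32), ('int', 32)) == ('int', 32)   # int32 + int32 -> int32
assert promote(('int', 64), ('int', 64)) == ('int', 64)   # int64 + int64 -> int64
assert promote(('int', 8), ('uint', 16)) == ('int', 32)   # int8 + uint16 -> int32
assert promote(('uint', 32), ('int', 32)) == ('int', 32)  # uint32 + int32 -> int32
```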
Proposal impact
===============
Semantics
---------
With this proposal, the semantics become clearer. Regardless of whether
the arguments and constants of a function were explicitly typed or not,
the results of various expressions at any point in the function have
easily predictable types.
When using built-in Python ``int``, the user gets acceptable magnitude
(32 or 64 bits depending on the system's bitness), and the type remains
the same across all computations.
When explicitly using smaller bitwidths, intermediate results don't
suffer from magnitude loss, since their bitwidth is promoted to ``intp``.
There is also less potential for annoyances with the type unification
system as demonstrated above. The user would have to force several
different types to be faced with such an error.
One potential cause for concern is the discrepancy with Numpy's scalar
semantics; but at the same time this brings Numba scalar semantics closer
to array semantics (both Numba's and Numpy's), which seems a desirable
outcome as well.
It is worth pointing out that some sources of integer numbers, such
as the ``range()`` built-in, always yield 32-bit integers or larger.
This proposal could be an opportunity to standardize them on ``intp``.
Performance
-----------
Except in trivial cases, it seems unlikely that the current "best fit"
behaviour for integer constants really brings a performance benefit. After
all, most integers in Numba code would either be stored in arrays (with
well-known types, chosen by the user) or be used as indices, where an ``int8``
is highly unlikely to fare better than an ``intp`` (actually, it may be worse,
if LLVM isn't able to optimize away the required sign-extension).
As a side note, the default use of ``intp`` rather than ``int64``
ensures that 32-bit systems won't suffer from poor arithmetic performance.
Implementation
--------------
Optimistically, this proposal may simplify some Numba internals a bit.
Or, at least, it doesn't threaten to make them significantly more complicated.
Limitations
-----------
This proposal doesn't really solve the combination of signed and unsigned
integers. It is geared mostly at solving the bitwidth issues, which are
a somewhat common cause of pain for users. Unsigned integers are in
practice very uncommon in Numba-compiled code, except when explicitly
asked for, and therefore much less of a pain point.
On the bitwidth front, 32-bit systems could still show discrepancies based
on the values of constants: if a constant is too large to fit in 32 bits,
it is typed ``int64``, which propagates through other computations.
This would be reminiscent of the current behaviour, but rarer and much
more controlled.
Long-term horizon
-----------------
While we believe this proposal makes Numba's behaviour more regular and more
predictable, it also pulls it further from general compatibility with pure
Python semantics, where users can assume arbitrary-precision integers without
any truncation issues.
===================
NBEP 3: JIT Classes
===================
:Author: Siu Kwan Lam
:Date: Dec 2015
:Status: Draft
Introduction
============
Numba does not yet support user-defined classes.
Classes provide useful abstraction and promote modularity when used
right. In the simplest sense, a class specifies the set of data and
operations as attributes and methods, respectively.
A class instance is an instantiation of that class.
This proposal will focus on supporting this simple use case of classes--with
just attributes and methods. Other features, such as class methods, static
methods, and inheritance, are deferred to another proposal, but we believe
these features can be easily implemented given the foundation described here.
Proposal: jit-classes
=====================
A jit-class is more restricted than a Python class.
We will focus on the following operations on a class and its instance:
* Instantiation: create an instance of a class using the class object as the
constructor: ``cls(*args, **kwargs)``
* Destruction: remove resources allocated during instantiation and release
all references to other objects.
* Attribute access: loading and storing attributes using ``instance.attr``
syntax.
* Method access: loading methods using ``instance.method`` syntax.
With these operations, a class object (not the instance) does not need to be
materialized. Using the class object as a constructor is fully resolved (a
runtime implementation is picked) during the typing phase in the compiler.
This means **a class object will not be first class**. On the other hand,
implementing a first-class class object would require an "interface" type,
i.e. the type of a class.
The instantiation of a class will allocate resources for storing the data
attributes. This is described in the "Storage model" section. Methods are
never stored in the instance. They are information attached to the class.
Since a class object only exists in the type domain, the methods will also be
fully resolved at the typing phase. Again, Numba does not have first-class
function values, and each function type maps uniquely to a single function
implementation (this would need to change to support function values as arguments).
A class instance can contain other NRT reference-counted objects as attributes.
To properly clean up an instance, a destructor is called when the reference
count of the instance drops to zero. This is described in the
"Reference count and destructor" section.
Storage model
~~~~~~~~~~~~~
For compatibility with C, attributes are stored in a simple plain-old-data
structure. Each attribute is stored in a user-defined order in a padded
(for proper alignment), contiguous memory region. An instance that contains
three fields of type int32, float32 and complex64 will be compatible with the following
C structure::
struct {
int32 field0;
float32 field1;
complex64 field2;
};
This will also be compatible with an aligned NumPy structured dtype.
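As an illustration, the layout above matches an aligned NumPy structured dtype along these lines (the field names are hypothetical):

```python
import numpy as np

# Aligned structured dtype mirroring the C struct above
instance_dtype = np.dtype(
    [('field0', np.int32), ('field1', np.float32), ('field2', np.complex64)],
    align=True,
)

assert instance_dtype.names == ('field0', 'field1', 'field2')
```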
Methods
~~~~~~~
Methods are regular functions that can be bound to an instance.
They can be compiled as regular functions by Numba.
The operation ``getattr(instance, name)`` (getting an attribute ``name`` from
``instance``) binds the instance to the requested method at runtime.
The special ``__init__`` method is also handled like regular functions.
``__del__`` is not supported at this time.
Reference count and destructor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An instance of a jit-class is reference-counted by NRT. Since it may contain
other NRT-tracked objects, it must call a destructor when its reference count
drops to zero. The destructor decrements the reference count of all
attributes by one.
At this time, there is no support for user defined ``__del__`` method.
Proper cleanup for cyclic references is not handled at this time;
cycles will cause memory leaks.
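The reference-counting scheme can be sketched with a toy pure-Python model (a hypothetical illustration only; NRT itself is implemented in C inside Numba):

```python
class Instance:
    """Toy model of NRT reference counting for a jit-class instance."""

    def __init__(self, attrs):
        self.refcount = 1
        self.attrs = attrs        # other reference-counted objects

    def incref(self):
        self.refcount += 1

    def decref(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.destructor()

    def destructor(self):
        # Decrement the reference count of all attributes by one
        for attr in self.attrs:
            attr.decref()

leaf = Instance([])
leaf.incref()                 # now also referenced by `owner` below
owner = Instance([leaf])
owner.decref()                # drops owner to zero -> destructor runs
assert owner.refcount == 0
assert leaf.refcount == 1     # owner released its reference to leaf
```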
Type inference
~~~~~~~~~~~~~~
So far we have not described the type of the attributes or the methods.
Type information is necessary to materialize the instance (e.g. allocate the
storage). The simplest way is to let the user provide the type of each
attribute as well as the ordering; for instance::
dct = OrderedDict()
dct['x'] = int32
dct['y'] = float32
Allowing the user to supply an ordered dictionary provides the names, ordering
and types of the attributes. However, these statically typed semantics are not
as flexible as Python's, under which a class behaves like a generic class.
Inferring the type of attributes is difficult. In a previous attempt to
implement JIT classes, the ``__init__`` method was specialized to capture
the types stored into the attributes. Since the method can contain arbitrary
logic, the problem can become a dependent typing problem if types are assigned
conditionally depending on a value. (Very few languages implement dependent
typing, and those that do are mostly theorem provers.)
Example: typing function using an OrderedDict
---------------------------------------------
.. code-block:: python
spec = OrderedDict()
spec['x'] = numba.int32
spec['y'] = numba.float32
@jitclass(spec)
class Vec(object):
def __init__(self, x, y):
self.x = x
self.y = y
def add(self, dx, dy):
self.x += dx
self.y += dy
Example: typing function using a list of 2-tuples
-------------------------------------------------
.. code-block:: python
spec = [('x', numba.int32),
('y', numba.float32)]
@jitclass(spec)
class Vec(object):
...
Creating multiple jitclasses from a single class object
-------------------------------------------------------
The ``jitclass(spec)`` decorator creates a new jitclass type even when applied to
the same class object and the same type specification.
.. code-block:: python
class Vec(object):
...
Vec1 = jitclass(spec)(Vec)
Vec2 = jitclass(spec)(Vec)
# Vec1 and Vec2 are two different jitclass types
Usage from the Interpreter
~~~~~~~~~~~~~~~~~~~~~~~~~~
When constructing a new instance of a jitclass, a "box" is created that wraps
the underlying jitclass instance from numba. Attributes and methods are
accessible from the interpreter. The actual implementation will be in numba
compiled code. Any Python object is converted to its native
representation for consumption in numba. Similarly, the returned value is
converted to its Python representation. As a result, there may be overhead in
manipulating jitclass instances in the interpreter. This overhead is minimal
and should be easily amortized by more efficient computation in the compiled
methods.
Support for property, staticmethod and classmethod
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The use of ``property`` is accepted for getters and setters only. Deleters
are not supported.
The use of ``staticmethod`` is not supported.
The use of ``classmethod`` is not supported.
Inheritance
~~~~~~~~~~~
Class inheritance is not considered in this proposal. The only accepted base
class for a jitclass is ``object``.
Supported targets
~~~~~~~~~~~~~~~~~~
Only the CPU target (including the parallel target) is supported.
GPU targets (e.g. CUDA and HSA) are supported via an immutable version of the
jitclass instance, which will be described in a separate NBEP.
Other properties
~~~~~~~~~~~~~~~~
Given:
.. code-block:: python
spec = [('x', numba.int32),
('y', numba.float32)]
@jitclass(spec)
class Vec(object):
...
* ``isinstance(Vec(1, 2), Vec)`` is True.
* ``type(Vec(1, 2))`` may not be ``Vec``.
Future enhancements
~~~~~~~~~~~~~~~~~~~
This proposal has only described the basic semantics and functionality of a
jitclass. Additional features will be described in future enhancement
proposals.
import numpy as np
from numba.core import types
from numba.extending import overload
@overload(np.where)
def where(cond, x, y):
"""
Implement np.where().
"""
# Choose implementation based on argument types.
if isinstance(cond, types.Array):
# Array where() => return an array of the same shape
if all(ty.layout == 'C' for ty in (cond, x, y)):
def where_impl(cond, x, y):
"""
Fast implementation for C-contiguous arrays
"""
shape = cond.shape
if x.shape != shape or y.shape != shape:
raise ValueError("all inputs should have the same shape")
res = np.empty_like(x)
cf = cond.flat
xf = x.flat
yf = y.flat
rf = res.flat
for i in range(cond.size):
rf[i] = xf[i] if cf[i] else yf[i]
return res
else:
def where_impl(cond, x, y):
"""
Generic implementation for other arrays
"""
shape = cond.shape
if x.shape != shape or y.shape != shape:
raise ValueError("all inputs should have the same shape")
res = np.empty_like(x)
for idx, c in np.ndenumerate(cond):
res[idx] = x[idx] if c else y[idx]
return res
else:
def where_impl(cond, x, y):
"""
Scalar where() => return a 0-dim array
"""
scal = x if cond else y
return np.full_like(scal, scal)
return where_impl
-2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm15.610077 1.703125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm14.309021 -5.8125l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1292114 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm10.781952 0l0 -8.546875l-1.484375 0l0 -1.3125l1.484375 0l0 -1.046875q0 -0.984375 0.171875 -1.46875q0.234375 -0.65625 0.84375 -1.046875q0.609375 -0.40625 1.703125 -0.40625q0.703125 0 1.5625 0.15625l-0.25 1.46875q-0.515625 -0.09375 -0.984375 -0.09375q-0.765625 0 -1.078125 0.328125q-0.3125 0.3125 -0.3125 1.203125l0 0.90625l1.921875 0l0 1.3125l-1.921875 0l0 8.546875l-1.65625 0zm11.527069 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 
0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094452 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.978302 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm16.813202 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 
0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm13.621521 2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 
0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm16.453125 2.9375l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.2507324 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.0 6.71875l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 
0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm15.610077 1.703125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm16.766327 0l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 
0.90625 -0.75 2.859375zm16.016357 1.75l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm15.500702 5.875l0 -1.25q-0.9375 1.46875 -2.75 1.46875q-1.171875 0 -2.171875 -0.640625q-0.984375 -0.65625 -1.53125 -1.8125q-0.53125 -1.171875 -0.53125 -2.6875q0 -1.46875 0.484375 -2.671875q0.5 -1.203125 1.46875 -1.84375q0.984375 -0.640625 2.203125 -0.640625q0.890625 0 1.578125 0.375q0.703125 0.375 1.140625 0.984375l0 -4.875l1.65625 0l0 13.59375l-1.546875 0zm-5.28125 -4.921875q0 1.890625 0.796875 2.828125q0.8125 0.9375 1.890625 0.9375q1.09375 0 1.859375 -0.890625q0.765625 -0.890625 0.765625 -2.734375q0 -2.015625 -0.78125 -2.953125q-0.78125 -0.953125 -1.921875 -0.953125q-1.109375 0 -1.859375 0.90625q-0.75 0.90625 -0.75 2.859375z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m135.38583 92.44095c-12.500008 0 -25.007881 -9.244095 -25.000008 -18.48819c0.007873535 -9.244095 12.531494 -18.488194 25.062996 -18.488194" fill-rule="nonzero"></path><path stroke="#000000" stroke-width="1.0" stroke-linejoin="round" stroke-linecap="butt" d="m135.38583 92.44095c-12.500008 0 -25.007881 -9.244095 -25.000008 -18.48819c0.0039367676 -4.6220474 3.1368103 -9.244095 7.8351364 -12.710632c2.3491669 -1.7332687 5.089691 -3.177658 8.026146 -4.188732c1.4682312 -0.5055351 2.9854355 -0.9027405 4.527199 -1.1735649c0.385437 -0.06770706 0.77241516 -0.12751389 1.1605377 -0.17913818l0.09425354 -0.011520386" fill-rule="evenodd"></path><path 
fill="#000000" stroke="#000000" stroke-width="1.0" stroke-linecap="butt" d="m132.0291 55.68917l-1.0484619 1.1958656l3.00943 -1.3246613l-3.1568298 -0.9196701z" fill-rule="evenodd"></path><path fill="#000000" fill-opacity="0.0" d="m269.1811 73.95276l263.33862 0l0 36.97637l-263.33862 0z" fill-rule="nonzero"></path><path fill="#000000" d="m283.08734 99.37276l0.234375 1.484375q-0.703125 0.140625 -1.265625 0.140625q-0.90625 0 -1.40625 -0.28125q-0.5 -0.296875 -0.703125 -0.75q-0.203125 -0.46875 -0.203125 -1.984375l0 -5.65625l-1.234375 0l0 -1.3125l1.234375 0l0 -2.4375l1.65625 -1.0l0 3.4375l1.6875 0l0 1.3125l-1.6875 0l0 5.75q0 0.71875 0.078125 0.921875q0.09375 0.203125 0.296875 0.328125q0.203125 0.125 0.578125 0.125q0.265625 0 0.734375 -0.078125zm1.4489441 5.296875l-0.171875 -1.5625q0.546875 0.140625 0.953125 0.140625q0.546875 0 0.875 -0.1875q0.34375 -0.1875 0.5625 -0.515625q0.15625 -0.25 0.5 -1.25q0.046875 -0.140625 0.15625 -0.40625l-3.734375 -9.875l1.796875 0l2.046875 5.71875q0.40625 1.078125 0.71875 2.28125q0.28125 -1.15625 0.6875 -2.25l2.09375 -5.75l1.671875 0l-3.75 10.03125q-0.59375 1.625 -0.9375 2.234375q-0.4375 0.828125 -1.015625 1.203125q-0.578125 0.390625 -1.375 0.390625q-0.484375 0 -1.078125 -0.203125zm9.40625 -0.015625l0 -13.640625l1.53125 0l0 1.28125q0.53125 -0.75 1.203125 -1.125q0.6875 -0.375 1.640625 -0.375q1.265625 0 2.234375 0.65625q0.96875 0.640625 1.453125 1.828125q0.5 1.1875 0.5 2.59375q0 1.515625 -0.546875 2.734375q-0.546875 1.203125 -1.578125 1.84375q-1.03125 0.640625 -2.171875 0.640625q-0.84375 0 -1.515625 -0.34375q-0.65625 -0.359375 -1.078125 -0.890625l0 4.796875l-1.671875 0zm1.515625 -8.65625q0 1.90625 0.765625 2.8125q0.78125 0.90625 1.875 0.90625q1.109375 0 1.890625 -0.9375q0.796875 -0.9375 0.796875 -2.921875q0 -1.875 -0.78125 -2.8125q-0.765625 -0.9375 -1.84375 -0.9375q-1.0625 0 -1.890625 1.0q-0.8125 1.0 -0.8125 2.890625zm15.610077 1.703125l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 
-1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm14.309021 -5.8125l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.1292114 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm10.781952 0l0 -8.546875l-1.484375 0l0 -1.3125l1.484375 0l0 -1.046875q0 -0.984375 0.171875 -1.46875q0.234375 -0.65625 0.84375 -1.046875q0.609375 -0.40625 1.703125 -0.40625q0.703125 0 1.5625 0.15625l-0.25 1.46875q-0.515625 -0.09375 -0.984375 -0.09375q-0.765625 0 -1.078125 0.328125q-0.3125 0.3125 -0.3125 1.203125l0 0.90625l1.921875 0l0 1.3125l-1.921875 0l0 8.546875l-1.65625 0zm11.527069 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 
0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.094452 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.978302 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm9.110107 5.875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm16.813202 -3.609375l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 
-1.859375zm9.640625 0.4375l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm14.277771 5.875l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 -0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.681427 0l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.9226074 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm10.375702 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 
6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm10.391357 -11.6875l0 -1.90625l1.671875 0l0 1.90625l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm4.129181 0l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0zm10.078857 0.8125l1.609375 0.25q0.109375 0.75 0.578125 1.09375q0.609375 0.453125 1.6875 0.453125q1.171875 0 1.796875 -0.46875q0.625 -0.453125 0.859375 -1.28125q0.125 -0.515625 0.109375 -2.15625q-1.09375 1.296875 -2.71875 1.296875q-2.03125 0 -3.15625 -1.46875q-1.109375 -1.46875 -1.109375 -3.515625q0 -1.40625 0.515625 -2.59375q0.515625 -1.203125 1.484375 -1.84375q0.96875 -0.65625 2.265625 -0.65625q1.75 0 2.875 1.40625l0 -1.1875l1.546875 0l0 8.515625q0 2.3125 -0.46875 3.265625q-0.46875 0.96875 -1.484375 1.515625q-1.015625 0.5625 -2.5 0.5625q-1.765625 0 -2.859375 -0.796875q-1.078125 -0.796875 -1.03125 -2.390625zm1.375 -5.921875q0 1.953125 0.765625 2.84375q0.78125 0.890625 1.9375 0.890625q1.140625 0 1.921875 -0.890625q0.78125 -0.890625 0.78125 -2.78125q0 -1.8125 -0.8125 -2.71875q-0.796875 -0.921875 -1.921875 -0.921875q-1.109375 0 -1.890625 0.90625q-0.78125 0.890625 -0.78125 2.671875z" fill-rule="nonzero"></path><path fill="#000000" fill-opacity="0.0" d="m0 49.952755l104.56693 0l0 36.976376l-104.56693 0z" fill-rule="nonzero"></path><path fill="#000000" d="m20.072014 76.87276l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 
-0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm12.978302 -3.171875l1.71875 0.21875q-0.40625 1.5 -1.515625 2.34375q-1.09375 0.828125 -2.8125 0.828125q-2.15625 0 -3.421875 -1.328125q-1.265625 -1.328125 -1.265625 -3.734375q0 -2.484375 1.265625 -3.859375q1.28125 -1.375 3.328125 -1.375q1.984375 0 3.234375 1.34375q1.25 1.34375 1.25 3.796875q0 0.140625 -0.015625 0.4375l-7.34375 0q0.09375 1.625 0.921875 2.484375q0.828125 0.859375 2.0625 0.859375q0.90625 0 1.546875 -0.46875q0.65625 -0.484375 1.046875 -1.546875zm-5.484375 -2.703125l5.5 0q-0.109375 -1.234375 -0.625 -1.859375q-0.796875 -0.96875 -2.078125 -0.96875q-1.140625 0 -1.9375 0.78125q-0.78125 0.765625 -0.859375 2.046875zm15.547592 2.265625l1.640625 0.21875q-0.265625 1.6875 -1.375 2.65625q-1.109375 0.953125 -2.734375 0.953125q-2.015625 0 -3.25 -1.3125q-1.21875 -1.328125 -1.21875 -3.796875q0 -1.59375 0.515625 -2.78125q0.53125 -1.203125 1.609375 -1.796875q1.09375 -0.609375 2.359375 -0.609375q1.609375 0 2.625 0.8125q1.015625 0.8125 1.3125 2.3125l-1.625 0.25q-0.234375 -1.0 -0.828125 -1.5q-0.59375 -0.5 -1.421875 -0.5q-1.265625 0 -2.0625 0.90625q-0.78125 0.90625 -0.78125 2.859375q0 1.984375 0.765625 2.890625q0.765625 0.890625 1.984375 0.890625q0.984375 0 1.640625 -0.59375q0.65625 -0.609375 0.84375 -1.859375zm9.34375 3.609375l0 -1.453125q-1.140625 1.671875 -3.125 1.671875q-0.859375 0 -1.625 -0.328125q-0.75 -0.34375 -1.125 -0.84375q-0.359375 -0.5 -0.515625 -1.234375q-0.09375 -0.5 -0.09375 -1.5625l0 -6.109375l1.671875 0l0 5.46875q0 1.3125 0.09375 1.765625q0.15625 0.65625 0.671875 1.03125q0.515625 0.375 1.265625 0.375q0.75 0 1.40625 -0.375q0.65625 -0.390625 0.921875 -1.046875q0.28125 -0.671875 0.28125 -1.9375l0 -5.28125l1.671875 0l0 9.859375l-1.5 0zm3.9069672 0l0 -9.859375l1.5 0l0 1.5q0.578125 -1.046875 1.0625 -1.375q0.484375 -0.34375 1.078125 -0.34375q0.84375 0 1.71875 0.546875l-0.578125 1.546875q-0.609375 -0.359375 -1.234375 
-0.359375q-0.546875 0 -0.984375 0.328125q-0.421875 0.328125 -0.609375 0.90625q-0.28125 0.890625 -0.28125 1.953125l0 5.15625l-1.671875 0zm5.556427 -2.9375l1.65625 -0.265625q0.140625 1.0 0.765625 1.53125q0.640625 0.515625 1.78125 0.515625q1.15625 0 1.703125 -0.46875q0.5625 -0.46875 0.5625 -1.09375q0 -0.5625 -0.484375 -0.890625q-0.34375 -0.21875 -1.703125 -0.5625q-1.84375 -0.46875 -2.5625 -0.796875q-0.703125 -0.34375 -1.078125 -0.9375q-0.359375 -0.609375 -0.359375 -1.328125q0 -0.65625 0.296875 -1.21875q0.3125 -0.5625 0.828125 -0.9375q0.390625 -0.28125 1.0625 -0.484375q0.671875 -0.203125 1.4375 -0.203125q1.171875 0 2.046875 0.34375q0.875 0.328125 1.28125 0.90625q0.421875 0.5625 0.578125 1.515625l-1.625 0.21875q-0.109375 -0.75 -0.65625 -1.171875q-0.53125 -0.4375 -1.5 -0.4375q-1.15625 0 -1.640625 0.390625q-0.484375 0.375 -0.484375 0.875q0 0.328125 0.203125 0.59375q0.203125 0.265625 0.640625 0.4375q0.25 0.09375 1.46875 0.4375q1.765625 0.46875 2.46875 0.765625q0.703125 0.296875 1.09375 0.875q0.40625 0.578125 0.40625 1.4375q0 0.828125 -0.484375 1.578125q-0.484375 0.734375 -1.40625 1.140625q-0.921875 0.390625 -2.078125 0.390625q-1.921875 0 -2.9375 -0.796875q-1.0 -0.796875 -1.28125 -2.359375zm10.015625 -8.75l0 -1.9062538l1.671875 0l0 1.9062538l-1.671875 0zm0 11.6875l0 -9.859375l1.671875 0l0 9.859375l-1.671875 0zm3.5041962 -4.921875q0 -2.734375 1.53125 -4.0625q1.265625 -1.09375 3.09375 -1.09375q2.03125 0 3.3125 1.34375q1.296875 1.328125 1.296875 3.671875q0 1.90625 -0.578125 3.0q-0.5625 1.078125 -1.65625 1.6875q-1.078125 0.59375 -2.375 0.59375q-2.0625 0 -3.34375 -1.328125q-1.28125 -1.328125 -1.28125 -3.8125zm1.71875 0q0 1.890625 0.828125 2.828125q0.828125 0.9375 2.078125 0.9375q1.25 0 2.0625 -0.9375q0.828125 -0.953125 0.828125 -2.890625q0 -1.828125 -0.828125 -2.765625q-0.828125 -0.9375 -2.0625 -0.9375q-1.25 0 -2.078125 0.9375q-0.828125 0.9375 -0.828125 2.828125zm9.281967 4.921875l0 -9.859375l1.5 0l0 1.40625q1.09375 -1.625 3.140625 -1.625q0.890625 0 1.640625 0.328125q0.75 0.3125 
1.109375 0.84375q0.375 0.515625 0.53125 1.21875q0.09375 0.46875 0.09375 1.625l0 6.0625l-1.671875 0l0 -6.0q0 -1.015625 -0.203125 -1.515625q-0.1875 -0.515625 -0.6875 -0.8125q-0.5 -0.296875 -1.171875 -0.296875q-1.0625 0 -1.84375 0.671875q-0.765625 0.671875 -0.765625 2.578125l0 5.375l-1.671875 0z" fill-rule="nonzero"></path></g></svg>
======================
NBEP 5: Type Inference
======================
:Author: Siu Kwan Lam
:Date: Sept 2016
:Status: Draft

This document describes the current type inference implementation in numba.

Introduction
============
Numba uses type information to ensure that every variable in the user code can
be correctly lowered (translated into a low-level representation). The type of
a variable describes the set of valid operations and available attributes.
Resolving this information during compilation avoids the overhead of type
checking and dispatching at runtime. However, Python is dynamically typed and
the user does not declare variable types. Since type information is absent,
we use type inference to reconstruct the missing information.

Numba Type Semantics
====================

Type inference operates on :term:`Numba IR`, a mostly static-single-assignment (SSA)
encoding of the Python bytecode. Conceptually, all intermediate values in the
Python code are explicitly assigned to a variable in the IR. Numba enforces
that each IR variable has exactly one type. A user variable (from the Python
source code) can be mapped to multiple variables in the IR. They are *versions*
of a variable. Each time a user variable is assigned to, a new version is
created. From that point, all subsequent references will use the new version.
The user variable *evolves* as the function logic updates its type. Merge
points (e.g. the block following an if-else, or a loop body) in the control
flow need extra care. At each merge point, a new version is implicitly created
to merge the different variable versions from the incoming paths.
The merging of the variable versions may translate into an implicit cast.
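For illustration, the versioning described above can be sketched in plain Python; the version names in the comments (``x.1``, ``x.2``, ...) are hypothetical and do not reflect Numba's actual IR naming:

```python
# Hypothetical illustration of variable versioning; the version names in
# the comments are invented for this sketch, not Numba's actual IR output.
def f(a):
    x = 1          # x.1: int
    x = x + a      # x.2: type depends on the type of a
    if a > 0:
        x = 1.5    # x.3: float
    # merge point after the if: a new version x.4 unifies x.2 and x.3;
    # the unified type is float, so the int path gets an implicit cast
    return x
```

Calling ``f`` with an integer exercises both paths: the version reaching the return must be able to hold either the int result or the float result, which is why the merge introduces an implicit cast.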

Numba uses function overloading to emulate Python duck-typing. The type of a
function can contain multiple call signatures that accept different argument
types and yield different return types. The process to decide the best
signature for an overloaded function is called *overload resolution*.
Numba partially implements the C++ overload resolution scheme
(`ISOCPP`_ 13.3 Overload Resolution). The scheme uses a "best fit" algorithm by
ranking each argument symmetrically. The five possible rankings in increasing
order of penalty are:

* *Exact*: the expected type is the same as the actual type.
* *Promotion*: the actual type can be upcast to the expected type by extending
  the precision without changing the behavior.
* *Safe conversion*: the actual type can be cast to the expected type by
  changing the type without losing information.
* *Unsafe conversion*: the actual type can be cast to the expected type by
  changing the type or downcasting the type even if it is imprecise.
* *No match*: no valid operation can convert the actual type to the expected
  type.

It is possible to have an ambiguous resolution. For example, a function with
signatures ``(int16, int32)`` and ``(int32, int16)`` can become ambiguous if
presented with the argument types ``(int32, int32)``, because demoting either
argument to ``int16`` is equally "fit". Fortunately, numba can usually resolve
such ambiguity by compiling a new version with the exact signature
``(int32, int32)``. When compilation is disabled and there are multiple
signatures with equal fit, an exception is raised.
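A toy sketch of this penalty-based resolution follows. It is not Numba's actual implementation; the penalty values, the conversion table, and all names are invented for illustration, but it reproduces the ambiguity from the ``(int16, int32)`` / ``(int32, int16)`` example above:

```python
# Toy sketch of "best fit" overload resolution; the penalties and the
# conversion table are invented for illustration, not taken from Numba.
EXACT, PROMOTE, SAFE, UNSAFE, NO_MATCH = 0, 1, 2, 3, 4

# (actual, expected) -> penalty, for a hypothetical integer hierarchy
CONV = {
    ('int16', 'int32'): PROMOTE,
    ('int32', 'int64'): PROMOTE,
    ('int16', 'int64'): PROMOTE,
    ('int32', 'int16'): UNSAFE,   # demotion may lose information
    ('int64', 'int32'): UNSAFE,
}

def rank(expected, actual):
    return EXACT if expected == actual else CONV.get((actual, expected), NO_MATCH)

def resolve(signatures, args):
    scored = []
    for sig in signatures:
        if len(sig) != len(args):
            continue
        pens = [rank(e, a) for e, a in zip(sig, args)]
        if NO_MATCH not in pens:
            scored.append((sum(pens), sig))
    if not scored:
        return None
    scored.sort(key=lambda t: t[0])
    if len(scored) > 1 and scored[0][0] == scored[1][0]:
        raise TypeError('ambiguous overload')   # equal fit, cannot decide
    return scored[0][1]
```

With signatures ``[('int16', 'int32'), ('int32', 'int16')]`` and arguments ``('int32', 'int32')``, both candidates score one unsafe demotion plus one exact match, so ``resolve`` raises ``TypeError`` exactly as the paragraph above describes.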

Type Inference
==============

The type inference in numba has three important components---type
variable, constraint network, and typing context.

* The *typing context* provides all the type information and typing related
  operations, including the logic for type unification, and the logic for typing
  of global and constant values. It defines the semantics of the language that
  can be compiled by numba.
* A *type variable* holds the type of each variable (in the Numba IR).
  Conceptually, it is initialized to the universal type and, as it is re-assigned,
  it stores a common type by unifying the new type with the existing type. The
  common type must be able to represent values of the new type and the existing
  type. Type conversion is applied as necessary and precision loss is
  accepted for usability reasons.
* The *constraint network* is a dependency graph built from the IR. Each
  node represents an operation in the Numba IR and updates at least one type
  variable. There may be cycles due to loops in user code.

The type inference process starts by seeding the argument types. These initial
types are propagated in the constraint network, which eventually fills all the
type variables. Due to cycles in the network, the process repeats until all
type variables converge, or it fails with undecidable types.

Type unification always returns a more "general" type (quoted because unsafe
conversion is allowed). Types converge to the least "general" type that can
represent all possible values that the variable can hold. Since unification
never moves down the type hierarchy and there is a single top type---the
universal type, ``object``---type inference is guaranteed to converge.

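
The convergence argument can be illustrated with a toy unification over a
small linear lattice; the lattice below is purely illustrative and much
simpler than Numba's real type hierarchy::

    # Toy lattice from least to most general; "object" is the top type.
    LATTICE = ["int32", "int64", "float64", "object"]

    def unify(a, b):
        # The least type in the lattice able to represent both a and b.
        return LATTICE[max(LATTICE.index(a), LATTICE.index(b))]

    class TypeVar:
        def __init__(self):
            self.type = None        # conceptually the unknown, universal type

        def add(self, ty):
            # Unification only ever moves *up* the lattice.
            self.type = ty if self.type is None else unify(self.type, ty)

    tv = TypeVar()
    for observed in ["int32", "float64", "int64"]:
        tv.add(observed)
    # tv.type is now "float64", the least general type covering all three.

Because ``unify`` never moves down the lattice and the lattice has a top,
repeated re-assignment must reach a fixed point.
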
A failure in type inference can have two causes. The first is user error due
to incorrect use of a type; this kind of error would also trigger an exception
under regular Python execution. The second is the use of an unsupported
feature in code that is otherwise valid under regular Python execution. Upon
an error, type inference sets all types to the object type and, as a result,
Numba falls back to *object mode*.

Since functions can be overloaded, type inference needs to decide the type
signature used at each call site. Overload resolution is applied to all known
overload versions of the callee function, which are described by
*call templates*. A call template can be either concrete or abstract. A
concrete call template defines a fixed list of all possible signatures. An
abstract call template defines the logic to compute the accepted signature,
and is used to implement generic functions.

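
The distinction can be sketched as follows. These toy classes only illustrate
the idea; the real template classes live in ``numba.core.typing.templates``
and have a different interface::

    class ConcreteCallTemplate:
        # A fixed list of accepted signatures: (argument types, return type).
        cases = [(("int64", "int64"), "int64"),
                 (("float64", "float64"), "float64")]

        def apply(self, args):
            for params, ret in self.cases:
                if params == args:
                    return ret
            return None            # no matching signature

    class AbstractCallTemplate:
        # Computes the accepted signature from the argument types;
        # here, a generic identity-like function.
        def apply(self, args):
            return args[0] if len(args) == 1 else None
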
Numba-compiled functions are generic functions due to their ability to compile
new versions. When such a function sees a new set of argument types, it
triggers type inference to validate them and determine the return type. When
there are nested calls to Numba-compiled functions, each call site triggers
type inference.

This poses a problem for recursive functions because type inference will also
be triggered recursively. Currently, simple self-recursion is supported only
if the signature is explicitly annotated by the user, which avoids unbounded
recursion in type inference that would never terminate.

.. _ISOCPP: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf

========================
NBEP 6: Typing Recursion
========================

:Author: Siu Kwan Lam
:Date: Sept 2016
:Status: Draft

Introduction
============

This document proposes an enhancement to the type inference algorithm to
support recursion without explicitly annotating the function signature. As a
result, the proposal enables Numba to type-infer both self-recursive and
mutually recursive functions, under some limitations. In practice, these
limitations can easily be overcome by specifying a compilation order.

The Current State
=================

Recursion support in numba is currently limited to self-recursion with explicit
type annotation for the function. This limitation comes from the inability to
determine the return type of a recursive call. This is because the callee is
either the current function (for self-recursion) or a parent function
(mutual recursion) whose type inference process has been suspended while
waiting for the function type of its callee. This results in the formation of
a cyclic dependency. For example, given a function ``foo()`` that calls
``bar()``, which in turn calls ``foo()``::


    def foo(x):
        if x > 0:
            return bar(x)
        else:
            return 1

    def bar(x):
        return foo(x - 1)

The type inference process of ``foo()`` depends on that of ``bar()``,
which depends on ``foo()``. Therefore ``foo()`` depends on itself and the type
inference algorithm cannot terminate.

The Solution
============

The proposed solution has two components:

1. The introduction of a compile-time *callstack* that tracks the functions
   being compiled.
2. The allowance of partial type inference on functions, leveraging the
   return type on non-recursive control-flow paths.

The compile-time callstack stores typing information of the functions being
compiled. Like an ordinary callstack, it pushes a new record every time a
function is "called". Since this occurs at compile-time, a "call" triggers
a compilation of the callee.

To detect recursion, the compile-time callstack is searched bottom-up
(stack grows downward) for a record that matches the callee.
As the record contains a reference to the type inference state,
the type inference process can be resumed to determine the return type.

Recall that the type inference process cannot be resumed normally because of the cyclic
dependency of the return type. In practice, we can assume that a useful
program must have a terminating condition, a path that does not recurse. So,
the type inference process can make an initial guess for the return-type at the recursive
call by using the return-type determined by the non-recursive paths. This
allows type information to propagate on the recursive paths to generate the
final return type, which is used to refine the type information by the
subsequent iteration of the type inference process.

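
The scheme can be modelled with a toy interpreter of "return paths". The
representation below is made up purely for illustration and is not how Numba
stores functions::

    # Each function is a list of return paths: either a concrete type or
    # a call to another function (mirroring foo()/bar() above).
    FUNCS = {
        "foo": [("call", "bar"), ("type", "int64")],
        "bar": [("call", "foo")],
    }

    callstack = []      # compile-time callstack of functions being typed

    def partial_return(name):
        # Return type determined by the non-recursive paths only.
        concrete = [val for kind, val in FUNCS[name] if kind == "type"]
        if not concrete:
            raise TypeError("potential runaway recursion in %r" % name)
        return concrete[0]

    def infer_return(name):
        if name in callstack:
            # Recursion detected: make an initial guess from the
            # non-recursive paths of the suspended function.
            return partial_return(name)
        callstack.append(name)
        types = set()
        for kind, val in FUNCS[name]:
            types.add(val if kind == "type" else infer_return(val))
        callstack.pop()
        assert len(types) == 1      # toy model: no unification needed
        return types.pop()

Here ``infer_return("foo")`` returns ``"int64"``: the recursive call inside
``bar()`` is answered by the partial result from ``foo()``'s non-recursive
``else`` branch.
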
The following figure illustrates the compile-time callstack when the compiler
reaches the recursive call to ``foo()`` from ``bar()``:

.. image:: recursion_callstack.svg
   :width: 400px

At this time, the type inference process of ``foo()`` is suspended and that of ``bar()``
is active. The compiler can see that the callee is already compiling by
searching the callstack. Knowing that it is a recursive call, the compiler
can resume the type-inference on ``foo()`` by ignoring the paths that contain
recursive calls. This means only the ``else`` branch is considered and we can
easily tell that ``foo()`` returns an ``int`` in this case. The compiler will
then set the initial return type of ``foo()`` and ``bar()`` to ``int``. The
subsequent type propagation can use this information to complete the type
inference of both functions, unifying the return-type of all returning paths.

Limitations
===========

For the proposed type inference algorithm to terminate, it assumes that at
least one control path leads to a return statement without making a recursive
call. If this is not the case, the algorithm will raise an exception
indicating a potential runaway recursion.

For example::

    @jit
    def first(x):
        # The recursive call must be guarded by a non-recursive path.
        if x > 0:
            return second(x)
        else:
            return 1

    @jit
    def second(x):
        return third(x)

    @jit
    def third(x):
        return first(x - 1)

The ``first()`` function must be compiled first for the type inference
algorithm to complete successfully. Compiling any other function first will
lead to a failure in type inference: the algorithm will treat it as a runaway
recursion due to the lack of a non-recursive exit in the recursive callee.
For example, compiling ``second()`` first will move the recursive call to
``first()``. When the compiler tries to resume the type inference process of
``second()``, it will fail to find a non-recursive path.

This is a small limitation that can easily be overcome by restructuring the
code or by precompiling the functions in a specific order.

.. _aot-compilation:

Ahead-of-Time compilation
=========================

.. note:: This module is pending deprecation. Please see
   :ref:`deprecation-numba-pycc` for more information.

.. currentmodule:: numba.pycc

.. class:: CC(extension_name, source_module=None)

   An object used to generate compiled extensions from Numba-compiled
   Python functions. *extension_name* is the name of the extension to be
   generated. *source_module* is the Python module containing the
   functions; if ``None``, it is inferred by examining the call stack.

   :class:`CC` instances have the following attributes and methods:

   .. attribute:: name

      (read-only attribute) The name of the extension module to be
      generated.

   .. attribute:: output_dir

      (read-write attribute) The directory the extension module will be
      written into. By default it is the directory the *source_module*
      is located in.

   .. attribute:: output_file

      (read-write attribute) The name of the file the extension module
      will be written to. By default this follows the Python naming
      convention for the current platform.

   .. attribute:: target_cpu

      (read-write attribute) The name of the CPU model to generate code
      for. This will select the appropriate instruction set extensions.
      By default, a generic CPU is selected in order to produce portable
      code. Recognized names for this attribute depend on the current
      architecture and LLVM version. If you have LLVM installed,
      ``llc -mcpu=help`` will give you a list. Examples on x86-64 are
      ``"ivybridge"``, ``"haswell"``, ``"skylake"`` or ``"broadwell"``.
      You can also give the value ``"host"``, which will select the
      current host CPU.

   .. attribute:: verbose

      (read-write attribute) If true, print out information while
      compiling the extension. False by default.

   .. decorator:: export(exported_name, sig)

      Mark the decorated function for compilation with the signature
      *sig*. The compiled function will be exposed as *exported_name*
      in the generated extension module.

      All exported names within a given :class:`CC` instance must be
      distinct, otherwise an exception is raised.

   .. method:: compile()

      Compile all exported functions and generate the extension module
      as specified by :attr:`output_dir` and :attr:`output_file`.

   .. method:: distutils_extension(**kwargs)

      Return a :py:class:`distutils.core.Extension` instance that allows
      you to integrate generation of the extension module into a
      conventional ``setup.py``-driven build process. The optional
      *kwargs* let you pass optional parameters to the
      :py:class:`~distutils.core.Extension` constructor.

      In this mode of operation, it is not necessary to call
      :meth:`compile` yourself. Also, :attr:`output_dir` and
      :attr:`output_file` will be ignored.
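
Putting the pieces together, a minimal build script might look like the
following sketch. The module and function names are illustrative; the
``cc.compile()`` call is commented out here because running it additionally
requires a working C toolchain and writes the extension module to disk::

    from numba.pycc import CC

    cc = CC('my_module')               # extension module named "my_module"
    cc.verbose = True

    @cc.export('mult', 'f8(f8, f8)')   # exposed as my_module.mult
    def mult(a, b):
        return a * b

    # cc.compile()   # writes my_module.<suffix> next to this file

Before compilation, the decorated function remains an ordinary Python
function and can be called directly, e.g. ``mult(2.0, 3.0)``.
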