Commit da2c1226 authored by lucasb-eyer

Initial import of code.

parent 3ae01ffb
# pydensecrf
Python wrapper to Philipp Krähenbühl's dense (fully connected) CRFs with Gaussian edge potentials.
PyDenseCRF
==========
This is a (Cython-based) Python wrapper for Philipp Krähenbühl's Fully-Connected CRFs (version 2).
If you use this code for your research, please cite:
```
Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials
Philipp Krähenbühl and Vladlen Koltun
NIPS 2011
```
and provide a link to this repository as a footnote or a citation.
Installation
============
You can install this using `pip` by executing:
```
TODO
```
Usage
=====
For images, the easiest way to use this library is through the `DenseCRF2D` class:
```
import numpy as np
import densecrf as dcrf
d = dcrf.DenseCRF2D(640, 480, 3) # width, height, nlabels
```
Unary potential
---------------
You can then set a fixed unary potential in the following way:
```
U = np.array(...) # Get the unary in some way.
print(U.shape) # -> (640, 480, 3)
print(U.dtype) # -> dtype('float32')
U = U.reshape((-1,3)) # Needs to be flat.
d.setUnaryEnergy(U)
# Or alternatively: d.setUnary(dcrf.ConstUnary(U))
```
Remember that `U` should be negative log-probabilities, so if you're using
probabilities `py`, don't forget to `U = -np.log(py)` them.
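For instance, here is a minimal sketch going from per-pixel class probabilities to
the unary above; the `probs` array is a made-up stand-in for whatever classifier
output you actually have:
```
import numpy as np

# Hypothetical per-pixel class probabilities, e.g. the softmax output of a classifier.
probs = np.random.dirichlet((1, 1, 1), size=(640, 480)).astype(np.float32)
print(probs.shape)  # -> (640, 480, 3), sums to one over the last axis.

U = -np.log(probs)  # Negative log-probabilities.
U = np.ascontiguousarray(U.reshape((-1, 3)), dtype=np.float32)
d.setUnaryEnergy(U)
```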
Requiring the `reshape` on the unary is an API wart that I'd like to fix, but
don't know how to without introducing an explicit dependency on numpy.
Pairwise potentials
-------------------
The two-dimensional case has two utility methods for adding the most-common pairwise potentials:
```
# This adds the color-independent term, features are the locations only.
d.addPairwiseGaussian(sxy=(3,3), compat=3, kernel=dcrf.DIAG_KERNEL, normalization=dcrf.NORMALIZE_SYMMETRIC)
# This adds the color-dependent term, i.e. features are (x,y,r,g,b).
# im is an image-array, e.g. im.dtype == np.uint8 and im.shape == (640,480,3)
d.addPairwiseBilateral(sxy=(80,80), srgb=(13,13,13), rgbim=im, compat=10, kernel=dcrf.DIAG_KERNEL, normalization=dcrf.NORMALIZE_SYMMETRIC)
```
Both of these methods have shortcuts and default-arguments such that the most
common use-case can be simplified to:
```
d.addPairwiseGaussian(sxy=3, compat=3)
d.addPairwiseBilateral(sxy=80, srgb=13, rgbim=im, compat=10)
```
### Compatibilities
The compatibility argument (`compat`) can be any of the following, as illustrated in the sketch after this list:
- A number, in which case a `PottsCompatibility` is used.
- A 1D array, in which case a `DiagonalCompatibility` is used.
- A 2D array, in which case a `MatrixCompatibility` is used.
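For example, a minimal sketch of the three forms (the numbers and the 3-label
setup are purely illustrative; arrays are assumed to be C-contiguous `float32`):
```
import numpy as np

d.addPairwiseGaussian(sxy=3, compat=3)                                      # Potts
d.addPairwiseGaussian(sxy=3, compat=np.array([3, 3, 5], dtype=np.float32))  # diagonal
d.addPairwiseGaussian(sxy=3, compat=3 * np.eye(3, dtype=np.float32))        # full matrix
```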
### Kernels
Possible values for the `kernel` argument are:
- `CONST_KERNEL`
- `DIAG_KERNEL` (the default)
- `FULL_KERNEL`
### Normalizations
Possible values for the `normalization` argument are:
- `NO_NORMALIZATION`
- `NORMALIZE_BEFORE`
- `NORMALIZE_AFTER`
- `NORMALIZE_SYMMETRIC` (the default)
Inference
---------
The easiest way to do inference is to simply call:
```
Q = d.inference(5)  # 5 iterations
```
And the MAP prediction is then:
```
map = np.argmax(Q, axis=0).reshape((640,480))
```
Step-by-step inference
----------------------
If for some reason you want to run the inference loop manually, you can do so:
```
Q, tmp1, tmp2 = d.startInference()
for i in range(5):
    print("KL-divergence at {}: {}".format(i, d.klDivergence(Q)))
    d.stepInference(Q, tmp1, tmp2)
```
Generic non-2D
--------------
The `DenseCRF` class can be used for generic (non-2D) dense CRFs.
Its usage is exactly the same as above, except that the 2D-specific pairwise
potentials `addPairwiseGaussian` and `addPairwiseBilateral` are missing.
Instead, you need to use the generic `addPairwiseEnergy` method like this:
```
d = dcrf.DenseCRF(100, 3) # npoints, nlabels
feats = np.array(...) # Get the pairwise features from somewhere.
print(feats.shape) # -> (100, 3)
print(feats.dtype) # -> dtype('float32')
d.addPairwiseEnergy(feats)
```
In addition, you can pass `compat`, `kernel` and `normalization`
arguments just like in the 2D Gaussian and bilateral cases.
The potential will be computed as `w*exp(-0.5 * |f_i - f_j|^2)`.
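As a rough sketch of how such features can be built by hand (this is my own
illustrative construction, assuming the same `(npoints, nfeatures)` layout as the
snippet above), per-point `(x, y)` locations scaled by `sxy` should roughly
reproduce the color-independent Gaussian pairwise term of the 2D case:
```
import numpy as np

# Illustrative: (x/sxy, y/sxy) location features for a 640x480 grid.
sxy = 3.0
xs, ys = np.meshgrid(np.arange(640), np.arange(480))
feats = np.stack([xs.ravel() / sxy, ys.ravel() / sxy], axis=-1).astype(np.float32)
print(feats.shape)  # -> (307200, 2)

d = dcrf.DenseCRF(640 * 480, 3)  # npoints, nlabels
d.addPairwiseEnergy(feats, compat=3)
```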
Learning
--------
The learning has not been fully wrapped. If you need it, get in touch or better
yet, wrap it and submit a pull-request!
from eigen cimport *


cdef extern from "densecrf/include/labelcompatibility.h":
    cdef cppclass LabelCompatibility:
        pass

    cdef cppclass PottsCompatibility(LabelCompatibility):
        PottsCompatibility(float) except +

    cdef cppclass DiagonalCompatibility(LabelCompatibility):
        DiagonalCompatibility(const c_VectorXf&) except +

    cdef cppclass MatrixCompatibility(LabelCompatibility):
        MatrixCompatibility(const c_MatrixXf&) except +


cdef extern from "densecrf/include/unary.h":
    cdef cppclass UnaryEnergy:
        pass

    cdef cppclass ConstUnaryEnergy(UnaryEnergy):
        ConstUnaryEnergy(const c_MatrixXf& unary) except +

    cdef cppclass LogisticUnaryEnergy(UnaryEnergy):
        LogisticUnaryEnergy(const c_MatrixXf& L, const c_MatrixXf& feature) except +


cdef class Unary:
    cdef UnaryEnergy *thisptr
    cdef UnaryEnergy* move(self)

cdef class ConstUnary(Unary):
    pass

cdef class LogisticUnary(Unary):
    pass


cdef extern from "densecrf/include/pairwise.h":
    cpdef enum NormalizationType: NO_NORMALIZATION, NORMALIZE_BEFORE, NORMALIZE_AFTER, NORMALIZE_SYMMETRIC
    cpdef enum KernelType: CONST_KERNEL, DIAG_KERNEL, FULL_KERNEL


cdef extern from "densecrf/include/densecrf.h":
    cdef cppclass c_DenseCRF "DenseCRF":
        c_DenseCRF(int N, int M) except +

        # Setup methods.
        # TODO
        #void addPairwiseEnergy(PairwisePotential *potential)
        void addPairwiseEnergy(const c_MatrixXf &features, LabelCompatibility*, KernelType, NormalizationType)
        void setUnaryEnergy(UnaryEnergy *unary)
        void setUnaryEnergy(const c_MatrixXf &unary)
        void setUnaryEnergy(const c_MatrixXf &L, const c_MatrixXf &feature)

        # Inference methods.
        c_MatrixXf inference(int n_iterations)
        # TODO: Not enabled because it would require wrapping VectorXs (note the `s`)
        #c_VectorXs map(int n_iterations)

        # Step-by-step inference methods.
        c_MatrixXf startInference() const
        void stepInference(c_MatrixXf &Q, c_MatrixXf &tmp1, c_MatrixXf &tmp2) const

        #double gradient( int n_iterations, const ObjectiveFunction & objective, c_VectorXf * unary_grad, c_VectorXf * lbl_cmp_grad, c_VectorXf * kernel_grad=NULL ) const;

        double klDivergence(const c_MatrixXf &Q) const

        #c_VectorXf unaryParameters() const;
        #void setUnaryParameters( const c_VectorXf & v );
        #c_VectorXf labelCompatibilityParameters() const;
        #void setLabelCompatibilityParameters( const c_VectorXf & v );
        #c_VectorXf kernelParameters() const;
        #void setKernelParameters( const c_VectorXf & v );


cdef extern from "densecrf/include/densecrf.h":
    cdef cppclass c_DenseCRF2D "DenseCRF2D" (c_DenseCRF):
        c_DenseCRF2D(int W, int H, int M) except +
        void addPairwiseGaussian(float sx, float sy, LabelCompatibility*, KernelType, NormalizationType)
        void addPairwiseBilateral(float sx, float sy, float sr, float sg, float sb, const unsigned char *rgbim, LabelCompatibility*, KernelType, NormalizationType)


cdef class DenseCRF:
    cdef c_DenseCRF *_this

cdef class DenseCRF2D(DenseCRF):
    cdef c_DenseCRF2D *_this2d
# distutils: language = c++
# distutils: sources = densecrf/src/densecrf.cpp densecrf/src/unary.cpp densecrf/src/pairwise.cpp densecrf/src/permutohedral.cpp densecrf/src/optimization.cpp densecrf/src/objective.cpp densecrf/src/labelcompatibility.cpp densecrf/src/util.cpp densecrf/external/liblbfgs/lib/lbfgs.c
# distutils: include_dirs = densecrf/include densecrf/external/liblbfgs/include

from numbers import Number

import eigen
cimport eigen


cdef LabelCompatibility* _labelcomp(compat):
    if isinstance(compat, Number):
        return new PottsCompatibility(compat)
    elif memoryview(compat).ndim == 1:
        return new DiagonalCompatibility(eigen.c_vectorXf(compat))
    elif memoryview(compat).ndim == 2:
        return new MatrixCompatibility(eigen.c_matrixXf(compat))
    else:
        raise ValueError("LabelCompatibility of dimension >2 not meaningful.")


cdef class Unary:
    # Because all of the APIs that take an object of this type will
    # take ownership. Thus, we need to make sure not to delete this
    # upon destruction.
    cdef UnaryEnergy* move(self):
        ptr = self.thisptr
        self.thisptr = NULL
        return ptr

    # It might already be deleted by the library, actually.
    # Yeah, pretty sure it is.
    def __dealloc__(self):
        del self.thisptr


cdef class ConstUnary(Unary):
    def __cinit__(self, float[:,::1] u not None):
        self.thisptr = new ConstUnaryEnergy(eigen.c_matrixXf(u))


cdef class LogisticUnary(Unary):
    def __cinit__(self, float[:,::1] L not None, float[:,::1] f not None):
        self.thisptr = new LogisticUnaryEnergy(eigen.c_matrixXf(L), eigen.c_matrixXf(f))


cdef class DenseCRF:
    def __cinit__(self, int nvar, int nlabels, *_, **__):
        # We need to swallow extra-arguments because superclass cinit function
        # will always be called with the same params as the subclass, automatically.
        # We also only want to avoid creating an object if we're just being called
        # from a subclass as part of the hierarchy.
        if type(self) is DenseCRF:
            self._this = new c_DenseCRF(nvar, nlabels)
        else:
            self._this = NULL

    def __dealloc__(self):
        # Because destructors are virtual, this is enough to delete any object
        # of child classes too.
        if self._this:
            del self._this

    def addPairwiseEnergy(self, float[:,::1] features not None, compat, KernelType kernel=DIAG_KERNEL, NormalizationType normalization=NORMALIZE_SYMMETRIC):
        self._this.addPairwiseEnergy(eigen.c_matrixXf(features), _labelcomp(compat), kernel, normalization)

    def setUnary(self, Unary u):
        self._this.setUnaryEnergy(u.move())

    def setUnaryEnergy(self, float[:,::1] u not None, float[:,::1] f = None):
        if f is None:
            self._this.setUnaryEnergy(eigen.c_matrixXf(u))
        else:
            self._this.setUnaryEnergy(eigen.c_matrixXf(u), eigen.c_matrixXf(f))

    def inference(self, int niter):
        return eigen.MatrixXf().wrap(self._this.inference(niter))

    def startInference(self):
        return eigen.MatrixXf().wrap(self._this.startInference()), eigen.MatrixXf(), eigen.MatrixXf()

    def stepInference(self, MatrixXf Q, MatrixXf tmp1, MatrixXf tmp2):
        self._this.stepInference(Q.m, tmp1.m, tmp2.m)

    def klDivergence(self, MatrixXf Q):
        return self._this.klDivergence(Q.m)


cdef class DenseCRF2D(DenseCRF):
    # The same comments as in the superclass' `__cinit__` apply here.
    def __cinit__(self, int w, int h, int nlabels, *_, **__):
        if type(self) is DenseCRF2D:
            self._this = self._this2d = new c_DenseCRF2D(w, h, nlabels)

    def addPairwiseGaussian(self, sxy, compat, KernelType kernel=DIAG_KERNEL, NormalizationType normalization=NORMALIZE_SYMMETRIC):
        if isinstance(sxy, Number):
            sxy = (sxy, sxy)
        self._this2d.addPairwiseGaussian(sxy[0], sxy[1], _labelcomp(compat), kernel, normalization)

    def addPairwiseBilateral(self, sxy, srgb, unsigned char[:,:,::1] rgbim not None, compat, KernelType kernel=DIAG_KERNEL, NormalizationType normalization=NORMALIZE_SYMMETRIC):
        if isinstance(sxy, Number):
            sxy = (sxy, sxy)
        if isinstance(srgb, Number):
            srgb = (srgb, srgb, srgb)
        self._this2d.addPairwiseBilateral(
            sxy[0], sxy[1], srgb[0], srgb[1], srgb[2], &rgbim[0,0,0], _labelcomp(compat), kernel, normalization
        )
DenseCRF - Code
===============
http://graphics.stanford.edu/projects/drf/
This software pertains to the research described in the ICML 2013 paper:
Parameter Learning and Convergent Inference for Dense Random Fields, by
Philipp Krähenbühl and Vladlen Koltun
and the NIPS 2011 paper:
Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials, by
Philipp Krähenbühl and Vladlen Koltun
If you're using this code in a publication, please cite our papers.
This software is provided for research purposes, with absolutely no warranty
or suggested support, and use of it must follow the BSD license agreement, at
the top of each source file. *Please do not contact the authors for assistance
with installing, understanding or running the code.* However if you think you
have found an interesting bug, the authors would be grateful if you could pass
on the information.
Changes to the original code
----------------------------
The only major difference in this released version of the code is that I directly
compute the gradient of the permutohedral lattice, instead of the general Gauss
transform (the 3-line formula on p. 6 of the ICML 2013 paper). The gradient of the
permutohedral lattice evaluates the exact gradient of the approximate filter.
In higher dimensions (>3) the filter can be non-continuous, which can complicate
the optimization. The kernel gradient is also scaled lower than the other
parameters, which complicates the optimization.
How to compile the code
-----------------------
Dependencies:
* cmake http://www.cmake.org/
* Eigen (included)
* liblbfgs (included)
Linux, Mac OS X and Windows (cygwin):

    mkdir build
    cd build
    cmake -D CMAKE_BUILD_TYPE=Release ..
    make
    cd ..

Windows:

    You're probably better off just copying all files into a Visual Studio
    project
How to run the example
----------------------
An example on how to use the DenseCRF can be found in
examples/dense_inference.cpp. The example loads an image and some annotations.
It then uses a very simple classifier to compute a unary term based on those
annotations. A dense CRF with both color-dependent and color-independent terms
then finds the final, accurate labeling.
Linux, Mac OS X and Windows (cygwin):

    build/examples/dense_inference input_image.ppm annotations.ppm output.ppm

For example:

    build/examples/dense_inference examples/im1.ppm examples/anno1.ppm output1.ppm
An example of how to use the learning code can be found in
examples/dense_learning.cpp. The example loads a color image and a ground-truth
annotation. It then learns a CRF model with a logistic regression unary, a label
compatibility, and a Gaussian kernel.
Linux, Mac OS X and Windows (cygwin):

    build/examples/dense_learning input_image.ppm annotations.ppm output.ppm

For example:

    build/examples/dense_learning examples/im1.ppm examples/anno1.ppm output1.ppm
Please note that this implementation is slightly slower than the one used in
our NIPS 2011 paper, mainly because I tried to keep the code clean and easy
to understand.
include_directories( liblbfgs/include )
add_library( lbfgs liblbfgs/lib/lbfgs.c )
Naoaki Okazaki <okazaki at chokkan org>
The MIT License
Copyright (c) 1990 Jorge Nocedal
Copyright (c) 2007-2010 Naoaki Okazaki
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
2010-xx-xx Naoaki Okazaki <okazaki at chokkan org>
* libLBFGS 1.10:
- Fixed compiling errors on Mac OS X; this patch was kindly submitted by Nic Schraudolph.
- Reduced compiling warnings on Mac OS X; this patch was kindly submitted by Tamas Nepusz.
2010-01-29 Naoaki Okazaki <okazaki at chokkan org>
* libLBFGS 1.9:
- Fixed a mistake in checking the validity of the parameters "ftol" and "wolfe"; this mistake was discovered by Kevin S. Van Horn.
2009-07-13 Naoaki Okazaki <okazaki at chokkan org>
* libLBFGS 1.8:
- Accepted the patch submitted by Takashi Imamichi; the backtracking method now has three criteria for choosing the step length.
- Updated the documentation to explain the above three criteria.
2009-02-28 Naoaki Okazaki <okazaki at chokkan org>
* libLBFGS 1.7:
- Improved OWL-QN routines for stability.
- Removed the support of OWL-QN method in MoreThuente algorithm
because it accidentally fails in early stages of iterations for some
objectives. Because of this change, the OWL-QN method must be used
with the backtracking algorithm (LBFGS_LINESEARCH_BACKTRACKING), or
the library returns LBFGSERR_INVALID_LINESEARCH.
- Renamed line search algorithms as follows:
- LBFGS_LINESEARCH_BACKTRACKING: regular Wolfe condition.
- LBFGS_LINESEARCH_BACKTRACKING_LOOSE: regular Wolfe condition.
- LBFGS_LINESEARCH_BACKTRACKING_STRONG: strong Wolfe condition.
- Source code clean-up.
2008-11-02 Naoaki Okazaki <okazaki at chokkan org>
* libLBFGS 1.6:
- Improved line-search algorithm with strong Wolfe condition, which
was contributed by Takashi Imamichi. This routine is now default for
LBFGS_LINESEARCH_BACKTRACKING. The previous line search algorithm
with regular Wolfe condition is still available as
LBFGS_LINESEARCH_BACKTRACKING_LOOSE.
- Configurable stop index for L1-norm computation. A member variable
lbfgs_parameter_t::orthantwise_end was added to specify the index
number at which the library stops computing the L1 norm of the
variables. This is useful to prevent some variables from being
regularized by the OWL-QN method.
- A sample program written in C++ (sample/sample.cpp).
2008-07-10 Naoaki Okazaki <okazaki at chokkan org>
* libLBFGS 1.5:
- Configurable starting index for L1-norm computation. A member
variable lbfgs_parameter_t::orthantwise_start was added to specify
the index number from which the library computes the L1 norm of the
variables.
- Fixed a zero-division error when the initial variables have already
been a minimizer (reported by Takashi Imamichi). In this case, the
library returns LBFGS_ALREADY_MINIMIZED status code.
- Defined LBFGS_SUCCESS status code as zero; removed unused constants,
LBFGSFALSE and LBFGSTRUE.
- Fixed a compile error in an implicit down-cast.
2008-04-25 Naoaki Okazaki <okazaki at chokkan org>
* libLBFGS 1.4:
- Configurable line search algorithms. A member variable
lbfgs_parameter_t::linesearch was added to choose either MoreThuente
method (LBFGS_LINESEARCH_MORETHUENTE) or backtracking algorithm
(LBFGS_LINESEARCH_BACKTRACKING).
- Fixed a bug: the previous version did not compute pseudo-gradients
properly in the line search routines for OW-LQN. This bug might quit
an iteration process too early when the OW-LQN routine was activated
(0 < lbfgs_parameter_t::orthantwise_c).
- Configure script for POSIX environments.
- SSE/SSE2 optimizations with GCC.
- New functions lbfgs_malloc and lbfgs_free to use SSE/SSE2 routines
transparently. It is unnecessary to use these functions for libLBFGS
built without SSE/SSE2 routines; you can still use any memory
allocators if SSE/SSE2 routines are disabled in libLBFGS.
2007-12-16 Naoaki Okazaki <okazaki at chokkan org>
* libLBFGS 1.3:
- An API change. An argument was added to lbfgs() function to receive
the final value of the objective function. This argument can be set
to NULL if the final value is unnecessary.
- Fixed a null-pointer bug in the sample code (reported by Takashi
Imamichi).
- Added build scripts for Microsoft Visual Studio 2005 and GCC.
- Added README file.
2007-12-13 Naoaki Okazaki <okazaki at chokkan org>
* libLBFGS 1.2:
- Fixed a serious bug in orthant-wise L-BFGS. An important variable
was used without initialization.
- Configurable L-BFGS parameters (number of limited memories, epsilon).
2007-12-01 Naoaki Okazaki <okazaki at chokkan org>
* libLBFGS 1.1:
- Implemented orthant-wise L-BFGS.
- Implemented lbfgs_parameter_init() function.
- Fixed several bugs.
- API documentation.
2007-09-20 Naoaki Okazaki <okazaki at chokkan org>
* libLBFGS 1.0
- Initial release.
Installation Instructions
*************************
Copyright (C) 1994, 1995, 1996, 1999, 2000, 2001, 2002, 2004 Free
Software Foundation, Inc.
This file is free documentation; the Free Software Foundation gives
unlimited permission to copy, distribute and modify it.
Basic Installation
==================
These are generic installation instructions.
The `configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation. It uses
those values to create a `Makefile' in each directory of the package.
It may also create one or more `.h' files containing system-dependent
definitions. Finally, it creates a shell script `config.status' that
you can run in the future to recreate the current configuration, and a
file `config.log' containing compiler output (useful mainly for
debugging `configure').
It can also use an optional file (typically called `config.cache'
and enabled with `--cache-file=config.cache' or simply `-C') that saves
the results of its tests to speed up reconfiguring. (Caching is
disabled by default to prevent problems with accidental use of stale
cache files.)
If you need to do unusual things to compile the package, please try
to figure out how `configure' could check whether to do them, and mail
diffs or instructions to the address given in the `README' so they can
be considered for the next release. If you are using the cache, and at
some point `config.cache' contains results you don't want to keep, you
may remove or edit it.
The file `configure.ac' (or `configure.in') is used to create
`configure' by a program called `autoconf'. You only need
`configure.ac' if you want to change it or regenerate `configure' using
a newer version of `autoconf'.
The simplest way to compile this package is:
1. `cd' to the directory containing the package's source code and type
`./configure' to configure the package for your system. If you're
using `csh' on an old version of System V, you might need to type
`sh ./configure' instead to prevent `csh' from trying to execute
`configure' itself.
Running `configure' takes awhile. While running, it prints some
messages telling which features it is checking for.
2. Type `make' to compile the package.
3. Optionally, type `make check' to run any self-tests that come with
the package.
4. Type `make install' to install the programs and any data files and
documentation.
5. You can remove the program binaries and object files from the
source code directory by typing `make clean'. To also remove the
files that `configure' created (so you can compile the package for
a different kind of computer), type `make distclean'. There is
also a `make maintainer-clean' target, but that is intended mainly
for the package's developers. If you use it, you may have to get
all sorts of other programs in order to regenerate files that came
with the distribution.
Compilers and Options
=====================
Some systems require unusual options for compilation or linking that the
`configure' script does not know about. Run `./configure --help' for
details on some of the pertinent environment variables.
You can give `configure' initial values for configuration parameters
by setting variables in the command line or in the environment. Here
is an example:
./configure CC=c89 CFLAGS=-O2 LIBS=-lposix
*Note Defining Variables::, for more details.
Compiling For Multiple Architectures
====================================
You can compile the package for more than one kind of computer at the
same time, by placing the object files for each architecture in their
own directory. To do this, you must use a version of `make' that
supports the `VPATH' variable, such as GNU `make'. `cd' to the
directory where you want the object files and executables to go and run
the `configure' script. `configure' automatically checks for the
source code in the directory that `configure' is in and in `..'.
If you have to use a `make' that does not support the `VPATH'
variable, you have to compile the package for one architecture at a
time in the source code directory. After you have installed the
package for one architecture, use `make distclean' before reconfiguring
for another architecture.
Installation Names
==================
By default, `make install' will install the package's files in
`/usr/local/bin', `/usr/local/man', etc. You can specify an
installation prefix other than `/usr/local' by giving `configure' the
option `--prefix=PREFIX'.
You can specify separate installation prefixes for
architecture-specific files and architecture-independent files. If you
give `configure' the option `--exec-prefix=PREFIX', the package will
use PREFIX as the prefix for installing programs and libraries.
Documentation and other data files will still use the regular prefix.
In addition, if you use an unusual directory layout you can give
options like `--bindir=DIR' to specify different values for particular
kinds of files. Run `configure --help' for a list of the directories
you can set and what kinds of files go in them.
If the package supports it, you can cause programs to be installed
with an extra prefix or suffix on their names by giving `configure' the
option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.
Optional Features
=================
Some packages pay attention to `--enable-FEATURE' options to
`configure', where FEATURE indicates an optional part of the package.
They may also pay attention to `--with-PACKAGE' options, where PACKAGE
is something like `gnu-as' or `x' (for the X Window System). The
`README' should mention any `--enable-' and `--with-' options that the
package recognizes.
For packages that use the X Window System, `configure' can usually
find the X include and library files automatically, but if it doesn't,
you can use the `configure' options `--x-includes=DIR' and
`--x-libraries=DIR' to specify their locations.
Specifying the System Type
==========================
There may be some features `configure' cannot figure out automatically,
but needs to determine by the type of machine the package will run on.
Usually, assuming the package is built to be run on the _same_
architectures, `configure' can figure that out, but if it prints a
message saying it cannot guess the machine type, give it the
`--build=TYPE' option. TYPE can either be a short name for the system
type, such as `sun4', or a canonical name which has the form:
CPU-COMPANY-SYSTEM
where SYSTEM can have one of these forms:
OS KERNEL-OS
See the file `config.sub' for the possible values of each field. If
`config.sub' isn't included in this package, then this package doesn't
need to know the machine type.
If you are _building_ compiler tools for cross-compiling, you should
use the `--target=TYPE' option to select the type of system they will
produce code for.
If you want to _use_ a cross compiler, that generates code for a
platform different from the build platform, you should specify the
"host" platform (i.e., that on which the generated programs will
eventually be run) with `--host=TYPE'.
Sharing Defaults
================
If you want to set default values for `configure' scripts to share, you
can create a site shell script called `config.site' that gives default
values for variables like `CC', `cache_file', and `prefix'.
`configure' looks for `PREFIX/share/config.site' if it exists, then
`PREFIX/etc/config.site' if it exists. Or, you can set the
`CONFIG_SITE' environment variable to the location of the site script.
A warning: not all `configure' scripts look for a site script.
Defining Variables
==================
Variables not defined in a site shell script can be set in the
environment passed to `configure'. However, some packages may run
configure again during the build, and the customized values of these
variables may be lost. In order to avoid this problem, you should set
them in the `configure' command line, using `VAR=value'. For example:
./configure CC=/usr/local2/bin/gcc
will cause the specified gcc to be used as the C compiler (unless it is
overridden in the site shell script).
`configure' Invocation
======================
`configure' recognizes the following options to control how it operates.
`--help'
`-h'
Print a summary of the options to `configure', and exit.
`--version'
`-V'
Print the version of Autoconf used to generate the `configure'
script, and exit.
`--cache-file=FILE'
Enable the cache: use and save the results of the tests in FILE,
traditionally `config.cache'. FILE defaults to `/dev/null' to
disable caching.
`--config-cache'
`-C'
Alias for `--cache-file=config.cache'.
`--quiet'
`--silent'
`-q'
Do not print messages saying which checks are being made. To
suppress all normal output, redirect it to `/dev/null' (any error
messages will still be shown).
`--srcdir=DIR'
Look for the package's source code in directory DIR. Usually
`configure' can determine that directory automatically.
`configure' also accepts some other, not widely useful, options. Run
`configure --help' for more details.
# $Id$
SUBDIRS = lib sample
docdir = $(prefix)/share/doc/@PACKAGE@
doc_DATA = README INSTALL COPYING AUTHORS ChangeLog NEWS
EXTRA_DIST = \
    autogen.sh \
    lbfgs.sln
libLBFGS: C library of limited-memory BFGS (L-BFGS)
Copyright (c) 1990, Jorge Nocedal
Copyright (c) 2007-2010, Naoaki Okazaki
=========================================================================
1. Introduction
=========================================================================
libLBFGS is a C port of the implementation of Limited-memory
Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method written by Jorge Nocedal.
The original FORTRAN source code is available at:
http://www.ece.northwestern.edu/~nocedal/lbfgs.html
The L-BFGS method solves the unconstrained minimization problem:

    minimize F(x), x = (x1, x2, ..., xN),

requiring only that the objective function F(x) and its gradient G(x) are computable.
Refer to the libLBFGS web site for more information.
http://www.chokkan.org/software/liblbfgs/
=========================================================================
2. How to build
=========================================================================
[Microsoft Visual Studio 2008]
Open the solution file "lbfgs.sln" and build it.
[GCC]
$ ./configure
$ make
$ make install # To install libLBFGS library and header.
=========================================================================
3. Note on SSE/SSE2 optimization
=========================================================================
This library has SSE/SSE2 optimization routines for vector arithmetic
operations on Intel/AMD processors. The SSE2 routine is for 64 bit double
values, and the SSE routine is for 32 bit float values. Since the default
parameters in libLBFGS are tuned for double-precision values, you may need
to modify these parameters to use the SSE optimization routines.
To use the SSE2 optimization routine, specify --enable-sse2 option to the
configure script.
$ ./configure --enable-sse2
To build libLBFGS with SSE2 optimization enabled on Microsoft Visual
Studio 2005, define USE_SSE and __SSE2__ symbols.
Make sure to run libLBFGS on processors where SSE2 instructions are
available. The library does not check the existence of SSE2 instructions.
To package maintainers,
Please do not enable SSE/SSE2 optimization routine. The library built
with SSE/SSE2 optimization will crash without any notice when necessary
SSE/SSE2 instructions are unavailable on CPUs.
=========================================================================
4. License
=========================================================================
libLBFGS is distributed under the terms of the MIT license.
Please refer to COPYING file in the distribution.
$Id$
#!/bin/sh
# $Id$

if [ "$1" = "--force" ];
then
    FORCE=--force
    NOFORCE=
    FORCE_MISSING=--force-missing
else
    FORCE=
    NOFORCE=--no-force
    FORCE_MISSING=
fi

libtoolize --copy $FORCE 2>&1 | sed '/^You should/d' || {
    echo "libtoolize failed!"
    exit 1
}

aclocal $FORCE || {
    echo "aclocal failed!"
    exit 1
}

autoheader $FORCE || {
    echo "autoheader failed!"
    exit 1
}

automake -a -c $NOFORCE || {
    echo "automake failed!"
    exit 1
}

autoconf $FORCE || {
    echo "autoconf failed!"
    exit 1
}
/* config.h.in. Generated from configure.in by autoheader. */
/* Define to 1 if you have the <dlfcn.h> header file. */
#undef HAVE_DLFCN_H
/* Define to 1 if you have the <emmintrin.h> header file. */
#undef HAVE_EMMINTRIN_H
/* Define to 1 if you have the <inttypes.h> header file. */
#undef HAVE_INTTYPES_H
/* Define to 1 if you have the `m' library (-lm). */
#undef HAVE_LIBM
/* Define to 1 if you have the <memory.h> header file. */
#undef HAVE_MEMORY_H
/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H
/* Define to 1 if you have the <stdlib.h> header file. */
#undef HAVE_STDLIB_H
/* Define to 1 if you have the <strings.h> header file. */
#undef HAVE_STRINGS_H
/* Define to 1 if you have the <string.h> header file. */
#undef HAVE_STRING_H
/* Define to 1 if you have the <sys/stat.h> header file. */
#undef HAVE_SYS_STAT_H
/* Define to 1 if you have the <sys/types.h> header file. */
#undef HAVE_SYS_TYPES_H
/* Define to 1 if you have the <unistd.h> header file. */
#undef HAVE_UNISTD_H
/* Define to 1 if you have the <xmmintrin.h> header file. */
#undef HAVE_XMMINTRIN_H
/* Name of package */
#undef PACKAGE
/* Define to the address where bug reports for this package should be sent. */
#undef PACKAGE_BUGREPORT
/* Define to the full name of this package. */
#undef PACKAGE_NAME
/* Define to the full name and version of this package. */
#undef PACKAGE_STRING
/* Define to the one symbol short name of this package. */
#undef PACKAGE_TARNAME
/* Define to the version of this package. */
#undef PACKAGE_VERSION
/* Define to 1 if you have the ANSI C header files. */
#undef STDC_HEADERS
/* Version number of package */
#undef VERSION
dnl $Id$
dnl
dnl
dnl Exported and configured variables:
dnl CFLAGS
dnl LDFLAGS
dnl INCLUDES
dnl ------------------------------------------------------------------
dnl Initialization for autoconf
dnl ------------------------------------------------------------------
AC_PREREQ(2.59)
AC_INIT
AC_CONFIG_SRCDIR([lib/lbfgs.c])
dnl ------------------------------------------------------------------
dnl Initialization for automake
dnl ------------------------------------------------------------------
AM_INIT_AUTOMAKE(liblbfgs, 1.10)
AC_CONFIG_HEADERS(config.h)
AM_MAINTAINER_MODE
dnl ------------------------------------------------------------------
dnl Checks for program
dnl ------------------------------------------------------------------
AC_PROG_LIBTOOL
AC_PROG_INSTALL
AC_PROG_LN_S
AC_PROG_MAKE_SET
dnl ------------------------------------------------------------------
dnl Initialization for variables
dnl ------------------------------------------------------------------
CFLAGS="${ac_save_CFLAGS} -Wall"
LDFLAGS="${ac_save_LDFLAGS}"
INCLUDES="-I\$(top_srcdir) -I\$(top_srcdir)/include"
dnl ------------------------------------------------------------------
dnl Checks for header files.
dnl ------------------------------------------------------------------
AC_HEADER_STDC
AC_CHECK_HEADERS(xmmintrin.h emmintrin.h)
dnl ------------------------------------------------------------------
dnl Checks for debugging mode
dnl ------------------------------------------------------------------
AC_ARG_ENABLE(
    debug,
    [AS_HELP_STRING(
        [--enable-debug],
        [build for debugging]
    )],
    [CFLAGS="-DDEBUG -O -g ${CFLAGS}"],
    [CFLAGS="-O3 -ffast-math ${CFLAGS}"]
)
dnl ------------------------------------------------------------------
dnl Checks for profiling mode
dnl ------------------------------------------------------------------
AC_ARG_ENABLE(
    profile,
    [AS_HELP_STRING(
        [--enable-profile],
        [build for profiling]
    )],
    [CFLAGS="-DPROFILE -pg ${CFLAGS}"]
)
dnl ------------------------------------------------------------------
dnl Checks for SSE2 build
dnl ------------------------------------------------------------------
AC_ARG_ENABLE(
    sse2,
    [AS_HELP_STRING(
        [--enable-sse2],
        [enable SSE2 optimization routines]
    )],
    [CFLAGS="-msse2 -DUSE_SSE ${CFLAGS}"]
)
dnl ------------------------------------------------------------------
dnl Checks for library functions.
dnl ------------------------------------------------------------------
AC_CHECK_LIB(m, fabs)
dnl ------------------------------------------------------------------
dnl Export variables
dnl ------------------------------------------------------------------
AC_SUBST(CFLAGS)
AC_SUBST(LDFLAGS)
AC_SUBST(INCLUDES)
dnl ------------------------------------------------------------------
dnl Output the configure results.
dnl ------------------------------------------------------------------
AC_CONFIG_FILES(Makefile lib/Makefile sample/Makefile)
AC_OUTPUT