Commit 71d585d1 authored by Christopher Shallue

Merge remote-tracking branch 'upstream/master'

parents 5c9d38d9 73ae53ac
# Java baseimage, for Bazel.
-FROM java:8
+FROM openjdk:8
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
@@ -50,6 +50,8 @@ RUN python -m pip install \
  && python -m pip install pygraphviz \
    --install-option="--include-path=/usr/include/graphviz" \
    --install-option="--library-path=/usr/lib/graphviz/" \
+  && python -m jupyter_core.command nbextension enable \
+    --py --sys-prefix widgetsnbextension \
  && rm -rf /root/.cache/pip /tmp/pip*
# Installs the latest version of Bazel.
@@ -86,6 +88,5 @@ EXPOSE 8888
# This does not need to be compiled, only copied.
COPY examples $SYNTAXNETDIR/syntaxnet/examples
# Todo: Move this earlier in the file (don't want to invalidate caches for now).
-RUN jupyter nbextension enable --py --sys-prefix widgetsnbextension
CMD /bin/bash -c "bazel-bin/dragnn/tools/oss_notebook_launcher notebook --debug --notebook-dir=/opt/tensorflow/syntaxnet/examples"
@@ -20,12 +20,16 @@ This repository is largely divided into two sub-packages:
1. **DRAGNN:
    [code](https://github.com/tensorflow/models/tree/master/syntaxnet/dragnn),
-    [documentation](g3doc/DRAGNN.md)** implements Dynamic Recurrent Acyclic
-    Graphical Neural Networks (DRAGNN), a framework for building multi-task,
-    fully dynamic constructed computation graphs. Practically, we use DRAGNN to
-    extend our prior work from [Andor et al.
+    [documentation](g3doc/DRAGNN.md),
+    [paper](https://arxiv.org/pdf/1703.04474.pdf)** implements Dynamic Recurrent
+    Acyclic Graphical Neural Networks (DRAGNN), a framework for building
+    multi-task, fully dynamically constructed computation graphs. Practically, we
+    use DRAGNN to extend our prior work from [Andor et al.
    (2016)](http://arxiv.org/abs/1603.06042) with end-to-end, deep recurrent
-    models and to provide a much easier to use interface to SyntaxNet.
+    models and to provide a much easier to use interface to SyntaxNet. *DRAGNN
+    is designed first and foremost as a Python library, and therefore much
+    easier to use than the original SyntaxNet implementation.*
1. **SyntaxNet:
    [code](https://github.com/tensorflow/models/tree/master/syntaxnet/syntaxnet),
    [documentation](g3doc/syntaxnet-tutorial.md)** is a transition-based
@@ -42,7 +46,7 @@ There are three ways to use SyntaxNet:
    SyntaxNet/DRAGNN baseline for the CoNLL2017 Shared Task, and running the
    ParseySaurus models.
* You can use DRAGNN to train your NLP models for other tasks and dataset. See
-  "Getting started with DRAGNN below."
+  "Getting started with DRAGNN" below.
* You can continue to use the Parsey McParseface family of pre-trained
  SyntaxNet models. See "Pre-trained NLP models" below.
@@ -117,9 +121,13 @@ We have a few guides on this README, as well as more extensive
![DRAGNN](g3doc/unrolled-dragnn.png)
-An easy and visual way to get started with DRAGNN is to run [our Jupyter
-Notebook](examples/dragnn/basic_parser_tutorial.ipynb). Our tutorial
+An easy and visual way to get started with DRAGNN is to run our Jupyter
+notebooks for [interactive
+debugging](examples/dragnn/interactive_text_analyzer.ipynb) and [training a new
+model](examples/dragnn/trainer_tutorial.ipynb). Our tutorial
[here](g3doc/CLOUD.md) explains how to start it up from the Docker container.
+Once you have DRAGNN installed and running, try out the
+[ParseySaurus](g3doc/conll2017) models.
### Using the Pre-trained NLP models
@@ -285,6 +293,7 @@ Original authors of the code in this package include (in alphabetical order):
* Aliaksei Severyn
* Andy Golding
* Bernd Bohnet
+* Chayut Thanapirom
* Chris Alberti
* Daniel Andor
* David Weiss
@@ -294,6 +303,7 @@ Original authors of the code in this package include (in alphabetical order):
* Ji Ma
* Keith Hall
* Kuzman Ganchev
+* Lingpeng Kong
* Livio Baldini Soares
* Mark Omernick
* Michael Collins
# You need to build wheels before building this image. Please consult
# docker-devel/README.txt.
# This is the base of the openjdk image.
#
# It might be more efficient to use a minimal distribution, like Alpine. But
# the upside of this being popular is that people might already have it.
FROM buildpack-deps:jessie-curl
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
RUN apt-get update \
&& apt-get install -y \
file \
git \
graphviz \
libcurl3 \
libfreetype6 \
libgraphviz-dev \
liblapack3 \
libopenblas-base \
libpng12-0 \
libxft2 \
python-dev \
python-mock \
python-pip \
python2.7 \
zlib1g-dev \
&& apt-get clean \
&& (rm -f /var/cache/apt/archives/*.deb \
/var/cache/apt/archives/partial/*.deb /var/cache/apt/*.bin || true)
# Install common Python dependencies. Similar to above, remove caches
# afterwards to help keep Docker images smaller.
RUN pip install --ignore-installed pip \
&& python -m pip install numpy \
&& rm -rf /root/.cache/pip /tmp/pip*
RUN python -m pip install \
asciitree \
ipykernel \
jupyter \
matplotlib \
pandas \
protobuf \
scipy \
sklearn \
&& python -m ipykernel.kernelspec \
&& python -m pip install pygraphviz \
--install-option="--include-path=/usr/include/graphviz" \
--install-option="--library-path=/usr/lib/graphviz/" \
&& rm -rf /root/.cache/pip /tmp/pip*
COPY syntaxnet_with_tensorflow-0.2-cp27-none-linux_x86_64.whl $SYNTAXNETDIR/
RUN python -m pip install \
$SYNTAXNETDIR/syntaxnet_with_tensorflow-0.2-cp27-none-linux_x86_64.whl \
&& rm -rf /root/.cache/pip /tmp/pip*
# This makes the IP exposed actually "*"; we'll do host restrictions by passing
# a hostname to the `docker run` command.
COPY tensorflow/tensorflow/tools/docker/jupyter_notebook_config.py /root/.jupyter/
EXPOSE 8888
# This does not need to be compiled, only copied.
COPY examples $SYNTAXNETDIR/syntaxnet/examples
# For some reason, this works if we run it in a bash shell :/ :/ :/
CMD /bin/bash -c "python -m jupyter_core.command notebook --debug --notebook-dir=/opt/tensorflow/syntaxnet/examples"
Docker is used for packaging SyntaxNet. There are three primary things we
build with Docker:
1. A development image, which contains all source built with Bazel.
2. Python/pip wheels, built by running a command in the development container.
3. A minified image, which contains only the compiled TensorFlow and
   SyntaxNet, installed from the wheel built in step 2.
Important info (please read)
------------------------------
One thing to be wary of is that YOU CAN LOSE DATA IF YOU DEVELOP IN A DOCKER
CONTAINER. Please be very careful to mount data you care about to Docker
volumes, or use a volume mount so that it's mapped to your host filesystem.
Another note, especially relevant to training models, is that Docker sends the
whole source tree to the Docker daemon every time you try to build an image.
This can take some time if you have large temporary model files lying around.
You can exclude your model files by editing .dockerignore, or just don't store
them in the base directory.
Step 1: Building the development image
------------------------------
Simply run `docker build -t dragnn-oss .` in the base directory. Make sure you
have all the source checked out correctly, including git submodules.
Step 2: Building wheels
------------------------------
Please run,
bash ./docker-devel/build_wheels.sh
This actually builds the image from Step 1 as well.
Step 3: Building the minified image
------------------------------
First, ensure you have the file
syntaxnet_with_tensorflow-0.2-cp27-none-linux_x86_64.whl
in your working directory, from step 2. Then run,
docker build -t dragnn-oss:latest-minimal -f docker-devel/Dockerfile.min .
If the filename changes (e.g. you are on a different architecture), just update
Dockerfile.min.
Developing in Docker
------------------------------
We recommend developing in Docker by using the `./docker-devel/build_devel.sh`
script; it will set up a few volume mounts, and port mappings automatically.
You may want to add more port mappings on your own. If you want to drop into a
shell instead of launching the notebook, simply run,
./docker-devel/build_devel.sh /bin/bash
@@ -23,5 +23,6 @@ syntaxnet_base="/opt/tensorflow/syntaxnet"
docker run --rm -ti \
  -v "${root_path}"/syntaxnet:"${syntaxnet_base}"/syntaxnet \
  -v "${root_path}"/dragnn:"${syntaxnet_base}"/dragnn \
+  -v "${root_path}"/examples:"${syntaxnet_base}"/examples \
  -p 127.0.0.1:8888:8888 \
  dragnn-oss "$@"
@@ -13,14 +13,10 @@ package syntaxnet.dragnn;
message MasterSpec {
  repeated ComponentSpec component = 1;
-  // DEPRECATED: Use the "batch_size" param of DragnnTensorFlowTrainer instead.
-  optional int32 deprecated_batch_size = 2 [default = 1, deprecated = true];
-  // DEPRECATED: Use ComponentSpec.*_beam_size instead.
-  optional int32 deprecated_beam_size = 3 [default = 1, deprecated = true];
  // Whether to extract debug traces.
  optional bool debug_tracing = 4 [default = false];
+  reserved 2, 3, 5;
}
// Complete specification for a single task.
@@ -221,10 +217,6 @@ message GridPoint {
  // problems for updates at the start of training.
  optional double gradient_clip_norm = 11 [default = 0.0];
-  // DEPRECATED: Use TrainTarget instead.
-  repeated double component_weights = 5;
-  repeated bool unroll_using_oracle = 6;
  // A spec for using multiple optimization methods.
  message CompositeOptimizerSpec {
    // First optimizer.
@@ -254,6 +246,8 @@ message GridPoint {
  // should be restricted. If left empty, no filtering will take
  // place. Typically a single component.
  optional string self_norm_components_filter = 21;
+  reserved 5, 6;
}
// Training target to be built into the graph.
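For anyone still setting the removed fields, a minimal Python sketch of the intended migration follows. The per-component field names are an assumption based on the old "Use ComponentSpec.*_beam_size instead" comment; check spec.proto in your checkout for the exact names.

```python
# Sketch only: migrate settings off the removed MasterSpec fields.
from dragnn.protos import spec_pb2

spec = spec_pb2.MasterSpec()
spec.debug_tracing = True  # Still a MasterSpec-level option.

component = spec.component.add()
component.name = 'parser'
# Beam sizes now live on each component (field names assumed from the old
# "ComponentSpec.*_beam_size" comment).
component.training_beam_size = 8
component.inference_beam_size = 8

# Batch size is no longer part of the spec; per the removed comment, pass it
# to the trainer configuration instead.
```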
@@ -154,6 +154,7 @@ py_test(
    srcs = ["visualization_test.py"],
    deps = [
        ":visualization",
+       "//dragnn/protos:spec_py_pb2",
        "//dragnn/protos:trace_py_pb2",
        "@org_tensorflow//tensorflow:tensorflow_py",
    ],
@@ -54,6 +54,15 @@ def parse_trace_json(trace):
  return as_json
+def _optional_master_spec_json(master_spec):
+  """Helper function to return 'null' or a master spec JSON string."""
+  if master_spec is None:
+    return 'null'
+  else:
+    return json_format.MessageToJson(
+        master_spec, preserving_proto_field_name=True)
def _container_div(height='700px', contents=''):
  elt_id = str(uuid.uuid4())
  html = """
@@ -64,7 +73,11 @@ def _container_div(height='700px', contents=''):
  return elt_id, html
-def trace_html(trace, convert_to_unicode=True, height='700px', script=None):
+def trace_html(trace,
+               convert_to_unicode=True,
+               height='700px',
+               script=None,
+               master_spec=None):
  """Generates HTML that will render a master trace.
  This will result in a self-contained "div" element.
@@ -76,6 +89,8 @@ def trace_html(trace, convert_to_unicode=True, height='700px', script=None):
      often pass the output of this function to IPython.display.HTML.
    height: CSS string representing the height of the element, default '700px'.
    script: Visualization script contents, if the defaults are unacceptable.
+    master_spec: Master spec proto (parsed), which can improve the layout. May
+      be required in future versions.
  Returns:
    unicode or str with HTML contents.
@@ -89,10 +104,14 @@ def trace_html(trace, convert_to_unicode=True, height='700px', script=None):
      {div_html}
      <script type='text/javascript'>
        {script}
-        visualizeToDiv({json}, "{elt_id}");
+        visualizeToDiv({json}, "{elt_id}", {master_spec_json});
      </script>
      """.format(
-          script=script, json=json_trace, elt_id=elt_id, div_html=div_html)
+          script=script,
+          json=json_trace,
+          master_spec_json=_optional_master_spec_json(master_spec),
+          elt_id=elt_id,
+          div_html=div_html)
  return unicode(as_str, 'utf-8') if convert_to_unicode else as_str
@@ -174,11 +193,13 @@ class InteractiveVisualization(object):
        script=script, div_html=div_html)
    return unicode(html, 'utf-8')  # IPython expects unicode.
-  def show_trace(self, trace):
+  def show_trace(self, trace, master_spec=None):
    """Returns a JS script HTML fragment, which will populate the container.
    Args:
      trace: binary-encoded MasterTrace string.
+      master_spec: Master spec proto (parsed), which can improve the layout. May
+        be required in future versions.
    Returns:
      unicode with HTML contents.
@@ -187,8 +208,10 @@ class InteractiveVisualization(object):
      <meta charset="utf-8"/>
      <script type='text/javascript'>
        document.getElementById("{elt_id}").innerHTML = ""; // Clear previous.
-        visualizeToDiv({json}, "{elt_id}");
+        visualizeToDiv({json}, "{elt_id}", {master_spec_json});
      </script>
      """.format(
-          json=parse_trace_json(trace), elt_id=self.elt_id)
+          json=parse_trace_json(trace),
+          master_spec_json=_optional_master_spec_json(master_spec),
+          elt_id=self.elt_id)
    return unicode(html, 'utf-8')  # IPython expects unicode.
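A sketch of how the new `master_spec` argument might be used from a notebook. The empty trace below is only a stand-in; a real session would pass the serialized MasterTrace produced by an annotation run and the MasterSpec that was used to build the graph.

```python
from IPython import display

from dragnn.protos import spec_pb2
from dragnn.python import visualization

# Stand-in inputs: an empty serialized MasterTrace and a one-component spec.
trace_bytes = b''
master_spec = spec_pb2.MasterSpec(
    component=[spec_pb2.ComponentSpec(name='parser')])

# One-off rendering of a single trace.
display.display(
    display.HTML(
        visualization.trace_html(trace_bytes, master_spec=master_spec)))

# Or use the interactive widget and refresh it as new traces arrive.
widget = visualization.InteractiveVisualization()
display.display(display.HTML(widget.initial_html()))
display.display(
    display.HTML(widget.show_trace(trace_bytes, master_spec=master_spec)))
```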
+# -*- coding: utf-8 -*-
"""Tests for dragnn.python.visualization."""
from __future__ import absolute_import
@@ -5,6 +6,7 @@ from __future__ import division
from __future__ import print_function
from tensorflow.python.platform import googletest
+from dragnn.protos import spec_pb2
from dragnn.protos import trace_pb2
from dragnn.python import visualization
@@ -15,10 +17,16 @@ def _get_trace_proto_string():
          step_trace=[
              trace_pb2.ComponentStepTrace(fixed_feature_trace=[]),
          ],
-          name='test_component',)
+          # Google Translate says this is "component" in Chinese. (To test UTF-8).
+          name='零件',)
  return trace.SerializeToString()
+def _get_master_spec():
+  return spec_pb2.MasterSpec(
+      component=[spec_pb2.ComponentSpec(name='jalapeño')])
class VisualizationTest(googletest.TestCase):
  def testCanFindScript(self):
@@ -37,6 +45,13 @@ class VisualizationTest(googletest.TestCase):
    widget.initial_html()
    widget.show_trace(_get_trace_proto_string())
+  def testMasterSpecJson(self):
+    visualization.trace_html(
+        _get_trace_proto_string(), master_spec=_get_master_spec())
+    widget = visualization.InteractiveVisualization()
+    widget.initial_html()
+    widget.show_trace(_get_trace_proto_string(), master_spec=_get_master_spec())
if __name__ == '__main__':
  googletest.main()
@@ -189,6 +189,25 @@ DragnnLayout.prototype.finalLayout = function(partition, stepPartition, cy) {
        Math.sign(slope) * Math.min(300, Math.max(100, Math.abs(slope)));
  });
+  // Reset ordering of components based on whether they are actually
+  // left-to-right. In the future, we may want to do the whole layout based on
+  // the master spec (what remains is slope magnitude and component order); then
+  // we can also skip the initial layout and CoSE intermediate layout.
+  if (this.options.masterSpec) {
+    _.each(this.options.masterSpec.component, (component) => {
+      const name = component.name;
+      const transitionParams = component.transition_system.parameters || {};
+      // null/undefined should default to true.
+      const leftToRight = transitionParams.left_to_right != 'false';
+      // If the slope isn't going in the direction it should, according to the
+      // master spec, reverse it.
+      if ((leftToRight ? 1 : -1) != Math.sign(stepSlope[name])) {
+        stepSlope[name] = -stepSlope[name];
+      }
+    });
+  }
  // Set new node positions. As before, component nodes auto-size to fit.
  _.each(stepPartition, (stepNodes) => {
    const component = _.head(stepNodes).data('parent');
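The lookups above (`transition_system.parameters`, `left_to_right`) match proto field names rather than JSON camelCase, which is why `_optional_master_spec_json` serializes with `preserving_proto_field_name=True`; the map values also arrive as strings, hence the comparison against the string `'false'`. Below is a small Python sketch of the same check applied to the serialized spec; the component name and parameter value are purely illustrative.

```python
import json

from google.protobuf import json_format
from dragnn.protos import spec_pb2

spec = spec_pb2.MasterSpec()
component = spec.component.add()
component.name = 'rl_parser'  # Illustrative right-to-left component.
component.transition_system.parameters['left_to_right'] = 'false'

serialized = json.loads(
    json_format.MessageToJson(spec, preserving_proto_field_name=True))
for comp in serialized['component']:
  params = comp.get('transition_system', {}).get('parameters', {})
  # Mirrors the JS check: anything other than the string 'false' counts as true.
  left_to_right = params.get('left_to_right') != 'false'
  print('%s left_to_right=%s' % (comp['name'], left_to_right))
```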
@@ -154,10 +154,13 @@ class InteractiveDragnnGraph {
   *
   * @param {!Object} masterTrace Master trace proto from DRAGNN.
   * @param {!Object} element Container DOM element to populate.
+   * @param {?Object} masterSpec Master spec proto from DRAGNN; if provided,
+   *     used to improve the layout.
   */
-  constructor(masterTrace, element) {
+  constructor(masterTrace, element, masterSpec) {
    this.masterTrace = masterTrace;
    this.element = element;
+    this.masterSpec = masterSpec || null;
  }
  /**
@@ -196,7 +199,7 @@ class InteractiveDragnnGraph {
    sel.abscomp().nodes().hide();
    // Redo layout.
-    this.cy.layout({name: 'dragnn'});
+    this.cy.layout({name: 'dragnn', masterSpec: this.masterSpec});
  }
  /**
@@ -211,7 +214,7 @@ class InteractiveDragnnGraph {
      boxSelectionEnabled: true,
      autounselectify: true,
      // We'll do more custom layout later.
-      layout: {name: 'dragnn'},
+      layout: {name: 'dragnn', masterSpec: this.masterSpec},
      style: [
        {
          selector: 'node',
@@ -285,12 +288,15 @@ class InteractiveDragnnGraph {
 * situations, the script tag containing the graph definition will be generated
 * inline.
 *
- * @param {Object} masterTrace Master trace proto from DRAGNN.
+ * @param {!Object} masterTrace Master trace proto from DRAGNN.
 * @param {string} divId ID of the page element to populate with the graph.
+ * @param {?Object} masterSpec Master spec proto from DRAGNN; if provided, used
+ *     to improve the layout.
 */
-const visualizeToDiv = function(masterTrace, divId) {
-  const interactiveGraph =
-      new InteractiveDragnnGraph(masterTrace, document.getElementById(divId));
+const visualizeToDiv = function(masterTrace, divId, masterSpec) {
+  const interactiveGraph = new InteractiveDragnnGraph(
+      masterTrace, document.getElementById(divId), masterSpec);
  interactiveGraph.initDomElements();
};
py_binary(
name = "tutorial_1",
srcs = ["tutorial_1.py"],
data = [":data"],
deps = [":tutorial-deps"],
)
py_binary(
name = "tutorial_2",
srcs = ["tutorial_2.py"],
data = [":data"],
deps = [":tutorial-deps"],
)
py_library(
name = "tutorial-deps",
deps = [
"//dragnn/core:dragnn_bulk_ops",
"//dragnn/core:dragnn_ops",
"//dragnn/protos:spec_py_pb2",
"//dragnn/python:graph_builder",
"//dragnn/python:lexicon",
"//dragnn/python:load_dragnn_cc_impl_py",
"//dragnn/python:spec_builder",
"//dragnn/python:visualization",
"//syntaxnet:load_parser_ops_py",
"//syntaxnet:parser_ops",
"//syntaxnet:sentence_py_pb2",
"@org_tensorflow//tensorflow:tensorflow_py",
"@org_tensorflow//tensorflow/core:protos_all_py",
],
)
filegroup(
name = "data",
data = glob(["tutorial_data/*"]),
)
sh_test(
name = "test_run_all_tutorials",
size = "medium",
srcs = ["test_run_all_tutorials.sh"],
args = [
"$(location :tutorial_1)",
"$(location :tutorial_2)",
],
data = [
":tutorial_1",
":tutorial_2",
],
)
99
e 88088
t 63849
a 60376
o 55522
n 50760
i 49620
s 43451
r 43314
h 34531
l 30483
d 27017
u 20955
c 19696
m 17547
y 15363
f 14604
g 14285
p 13521
w 13380
9 11333
. 10880
b 10234
v 7728
, 6776
k 6200
I 5027
- 3666
T 3262
A 2967
S 2823
' 2382
C 1954
E 1763
M 1748
P 1662
B 1457
x 1455
N 1420
W 1285
H 1248
" 1233
D 1214
O 1212
R 1153
! 1135
/ 1124
L 1094
: 1067
j 924
? 915
) 892
F 887
G 875
q 873
( 827
U 709
J 660
Y 588
z 566
_ 539
K 499
V 372
= 369
* 303
$ 254
@ 177
& 155
> 151
< 143
Q 142
; 100
’ 94
Z 92
X 73
# 69
+ 51
% 39
[ 34
] 34
“ 30
” 30
| 25
~ 17
` 15
‘ 13
– 9
— 9
^ 8
… 7
· 6
{ 4
} 3
é 2
£ 1
­ 1
³ 1
à 1
á 1
ç 1