OpenDAS / Fairseq

Commit 8df95dcc
Authored Oct 27, 2017 by Myle Ott
Parent: 5ef59abd

Upgrade args with max_source_positions and max_target_positions
Showing 1 changed file with 8 additions and 0 deletions.

fairseq/utils.py (+8, −0):
@@ -127,6 +127,7 @@ def load_ensemble_for_inference(filenames, src_dict, dst_dict):
             torch.load(filename, map_location=lambda s, l: default_restore_location(s, 'cpu'))
         )
     args = states[0]['args']
+    args = _upgrade_args(args)

     # build ensemble
     ensemble = []
@@ -137,6 +138,13 @@ def load_ensemble_for_inference(filenames, src_dict, dst_dict):
     return ensemble


+def _upgrade_args(args):
+    if not hasattr(args, 'max_source_positions'):
+        args.max_source_positions = args.max_positions
+        args.max_target_positions = args.max_positions
+    return args
+
+
 def prepare_sample(sample, volatile=False, cuda_device=None):
     """Wrap input tensors in Variable class."""
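For context, a minimal sketch of how the new _upgrade_args helper behaves on checkpoint args. This example is not part of the commit; the Namespace objects and the values 1024, 512, and 256 are illustrative. Args saved before max_positions was split into separate source/target limits get both new attributes copied from the old one, while args that already define max_source_positions pass through unchanged.

from argparse import Namespace


def _upgrade_args(args):
    # Same logic as the helper added in this commit: older checkpoints
    # only carry a single `max_positions`, so copy it into both new fields.
    if not hasattr(args, 'max_source_positions'):
        args.max_source_positions = args.max_positions
        args.max_target_positions = args.max_positions
    return args


# Old-style args: only `max_positions` is present (illustrative value).
old_args = _upgrade_args(Namespace(max_positions=1024))
assert old_args.max_source_positions == 1024
assert old_args.max_target_positions == 1024

# New-style args already have the split limits and are left untouched.
new_args = _upgrade_args(Namespace(max_source_positions=512, max_target_positions=256))
assert new_args.max_source_positions == 512
assert new_args.max_target_positions == 256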