OpenDAS
mmdetection3d
Commit 98cfb2ee
authored Jul 04, 2020 by wangtai, committed by zhangwenwei, Jul 04, 2020

Add API to support Lyft

parent 3f4c655c
Showing 18 changed files with 1798 additions and 65 deletions
data/lyft/test.txt  +218 -0
data/lyft/train.txt  +150 -0
data/lyft/val.txt  +30 -0
docs/getting_started.md  +21 -0
mmdet3d/core/evaluation/__init__.py  +2 -1
mmdet3d/core/evaluation/lyft_eval.py  +285 -0
mmdet3d/datasets/__init__.py  +4 -3
mmdet3d/datasets/custom_3d.py  +2 -1
mmdet3d/datasets/kitti_dataset.py  +37 -3
mmdet3d/datasets/lyft_dataset.py  +486 -0
mmdet3d/datasets/nuscenes_dataset.py  +43 -7
mmdet3d/datasets/scannet_dataset.py  +29 -0
mmdet3d/datasets/sunrgbd_dataset.py  +29 -0
requirements/runtime.txt  +1 -0
tools/create_data.py  +103 -7
tools/data_converter/kitti_converter.py  +17 -0
tools/data_converter/lyft_converter.py  +259 -0
tools/data_converter/nuscenes_converter.py  +82 -43
data/lyft/test.txt
0 → 100644
host-a004-lidar0-1233944976297786786-1233945001198600096
host-a004-lidar0-1233941213298388436-1233941238199278096
host-a011-lidar0-1234644740299444586-1234644765198350636
host-a004-lidar0-1233601648198462856-1233601673098488556
host-a011-lidar0-1232746157199035666-1232746182098240026
host-a009-lidar0-1236017375097801876-1236017399997624556
host-a011-lidar0-1233962329198070906-1233962354099359636
host-a007-lidar0-1233523259198624906-1233523284098400466
host-a004-lidar0-1233427443198938856-1233427468098241556
host-a007-lidar0-1233952760198014706-1233952785098360666
host-a004-lidar0-1232831114198639196-1232831139098689906
host-a011-lidar0-1233959097298631226-1233959122199133096
host-a011-lidar0-1232736280198893346-1232736305098557556
host-a008-lidar0-1235757362198327706-1235757387098420756
host-a007-lidar0-1233947142198222706-1233947167098201666
host-a009-lidar0-1234715568198453346-1234715593098413676
host-a004-lidar0-1232911076298600176-1232911101199054856
host-a011-lidar0-1236103955299212856-1236103980199733906
host-a007-lidar0-1237572884098557116-1237572908997807546
host-a011-lidar0-1233964240298899786-1233964265199453556
host-a004-lidar0-1233079262298510786-1233079287198908466
host-a004-lidar0-1233687421297771586-1233687446198032636
host-a008-lidar0-1235780110998543436-1235780135897779096
host-a011-lidar0-1232488772298383876-1232488797198160346
host-a004-lidar0-1231878498298402636-1231878523198702316
host-a015-lidar0-1235952792198484436-1235952817098084116
host-a004-lidar0-1235945794298780666-1235945819198802346
host-a011-lidar0-1233963833297804226-1233963858199264096
host-a004-lidar0-1233506417198072656-1233506442098833106
host-a009-lidar0-1231801138198800186-1231801163098659866
host-a011-lidar0-1232751280197949666-1232751305099090996
host-a004-lidar0-1233946658298833786-1233946683199182096
host-a004-lidar0-1233687742297900586-1233687767199090986
host-a004-lidar0-1232905595299117226-1232905620198562226
host-a004-lidar0-1233961285198272466-1233961310098968226
host-a011-lidar0-1233085141298793636-1233085166198948316
host-a010-lidar0-1232314667198394906-1232314692099896986
host-a007-lidar0-1230931253199029066-1230931278098162746
host-a007-lidar0-1232995403098212786-1232995427998257586
host-a011-lidar0-1236106341198519026-1236106366098688706
host-a004-lidar0-1233962919198446116-1233962944098602196
host-a011-lidar0-1232744874197271346-1232744899099026346
host-a011-lidar0-1233957289298013416-1233957314197859536
host-a011-lidar0-1236039862198450906-1236039887099020026
host-a011-lidar0-1233956316299257226-1233956341199458096
host-a004-lidar0-1233521279299019346-1233521304198708656
host-a007-lidar0-1233005263198512026-1233005288098233756
host-a007-lidar0-1232995508097871466-1232995532998255586
host-a011-lidar0-1232486236299484666-1232486261197508346
host-a011-lidar0-1233959639198614466-1233959664098178176
host-a007-lidar0-1233015278098306756-1233015302998395786
host-a004-lidar0-1235858020298070196-1235858045199172906
host-a011-lidar0-1236105788197720026-1236105813099285026
host-a004-lidar0-1233439235298936546-1233439260198396226
host-a004-lidar0-1232987587298107736-1232987612198297786
host-a011-lidar0-1236103100299832666-1236103125199414996
host-a015-lidar0-1235952391197848436-1235952416098412116
host-a004-lidar0-1233521391298537346-1233521416198295656
host-a004-lidar0-1232991769198099466-1232991794097984146
host-a004-lidar0-1233953140198673466-1233953165099150176
host-a004-lidar0-1233081052299073106-1233081077199184436
host-a007-lidar0-1233956133198909706-1233956158098315666
host-a011-lidar0-1235868868199238666-1235868893097732316
host-a007-lidar0-1233953460198410706-1233953485098842786
host-a011-lidar0-1233961831198549906-1233961856098369176
host-a011-lidar0-1233514529198419226-1233514554098449346
host-a007-lidar0-1230939239197974066-1230939264099426746
host-a011-lidar0-1233091237198666226-1233091262098974026
host-a004-lidar0-1233442845299348546-1233442870197585226
host-a009-lidar0-1236020549098474856-1236020573999334586
host-a011-lidar0-1234024976198295026-1234025001098882226
host-a011-lidar0-1232907883299065316-1232907908199299976
host-a004-lidar0-1233601706199079226-1233601731099035556
host-a011-lidar0-1233082653297382196-1233082678199113346
host-a011-lidar0-1236094968298801346-1236094993199079346
host-a007-lidar0-1233007278198865146-1233007303098536106
host-a004-lidar0-1232740139298853106-1232740164198443466
host-a007-lidar0-1232840098198209026-1232840123098175986
host-a007-lidar0-1232491902199024026-1232491927099237736
host-a004-lidar0-1233447945198501226-1233447970098212556
host-a004-lidar0-1233963468197846466-1233963493098405196
host-a011-lidar0-1232497134299369316-1232497159198333026
host-a007-lidar0-1233683913198228346-1233683938097909026
host-a004-lidar0-1233965315199029466-1233965340097937546
host-a011-lidar0-1236106510198830536-1236106535098940676
host-a011-lidar0-1234031888198360146-1234031913099866226
host-a011-lidar0-1232483258298822666-1232483283199104026
host-a011-lidar0-1233963883298333416-1233963908199031906
host-a011-lidar0-1232411607198546106-1232411632099155466
host-a009-lidar0-1236120835298099026-1236120860197794756
host-a004-lidar0-1233089444197814786-1233089469098441116
host-a004-lidar0-1233946716298797436-1233946741199280636
host-a004-lidar0-1233443288299030176-1233443313198834856
host-a004-lidar0-1233088880198387436-1233088905098238096
host-a011-lidar0-1233956935299563906-1233956960199421666
host-a009-lidar0-1236020134098279876-1236020158999209906
host-a011-lidar0-1236094859299152316-1236094884199534466
host-a008-lidar0-1235758174198256146-1235758199098282106
host-a011-lidar0-1236121184299486346-1236121209198975996
host-a011-lidar0-1233961523199115586-1233961548099271666
host-a004-lidar0-1234732754197914906-1234732779097924636
host-a009-lidar0-1236121511198446736-1236121536098500436
host-a008-lidar0-1236034565298114876-1236034590198781906
host-a006-lidar0-1236098659198274536-1236098684097992536
host-a009-lidar0-1236013611198536176-1236013636098119856
host-a015-lidar0-1233957342198356906-1233957367097686986
host-a011-lidar0-1234028898199503706-1234028923098637226
host-a007-lidar0-1234564891198141676-1234564916098497756
host-a011-lidar0-1233512411200132666-1233512436099006556
host-a011-lidar0-1232839888198187226-1232839913098323346
host-a011-lidar0-1233959147299652876-1233959172199345586
host-a011-lidar0-1233515019199540346-1233515044098381026
host-a007-lidar0-1233621306298394226-1233621331197987026
host-a009-lidar0-1236018631099309906-1236018655997712116
host-a011-lidar0-1233956770298774786-1233956795198595906
host-a011-lidar0-1234031012198940656-1234031037098235226
host-a011-lidar0-1232834951198260666-1232834976099168996
host-a004-lidar0-1231810077298582906-1231810102198764586
host-a015-lidar0-1235432061197143666-1235432086098855666
host-a004-lidar0-1233955370199325146-1233955395099287546
host-a007-lidar0-1232739122099041026-1232739146998701786
host-a008-lidar0-1231272360198562866-1231272385098213606
host-a009-lidar0-1234043856197659756-1234043881098769786
host-a011-lidar0-1236123850299533346-1236123875199183536
host-a004-lidar0-1232829779198385176-1232829804099132906
host-a004-lidar0-1234046078298238146-1234046103198417226
host-a011-lidar0-1236118880299007316-1236118905198814556
host-a011-lidar0-1232839522198475666-1232839547098176996
host-a004-lidar0-1233683047198434616-1233683072097863296
host-a011-lidar0-1232483134298411316-1232483159199017026
host-a011-lidar0-1232835366199044316-1232835391098346996
host-a011-lidar0-1236037587198526556-1236037612099015026
host-a004-lidar0-1233514041198025676-1233514066098168106
host-a011-lidar0-1233514916198928666-1233514941098966346
host-a007-lidar0-1233952607199156666-1233952632098069666
host-a011-lidar0-1234655115198110986-1234655140099130346
host-a015-lidar0-1236124264998206536-1236124289898560636
host-a011-lidar0-1234644690299314096-1234644715199118176
host-a011-lidar0-1235952056198818296-1235952081099333786
host-a011-lidar0-1232748626199298346-1232748651098789346
host-a007-lidar0-1232475199297946856-1232475224198641556
host-a007-lidar0-1233525441098373116-1233525465998053546
host-a008-lidar0-1235761573098617226-1235761597998604906
host-a004-lidar0-1232991714198772466-1232991739097967096
host-a004-lidar0-1233687683299064556-1233687708198041986
host-a007-lidar0-1233013003098382226-1233013027998697786
host-a004-lidar0-1233439306298453176-1233439331197984226
host-a004-lidar0-1233424178198144226-1233424203098692536
host-a006-lidar0-1236097465198430226-1236097490097938026
host-a011-lidar0-1232841048199446666-1232841073098839996
host-a011-lidar0-1233516714199191856-1233516739099730116
host-a007-lidar0-1234311713998886226-1234311738898096786
host-a007-lidar0-1233536831198375786-1233536856097999586
host-a004-lidar0-1233429274199024876-1233429299099306906
host-a004-lidar0-1233959153198568116-1233959178098071546
host-a011-lidar0-1234552220298342756-1234552245199496436
host-a011-lidar0-1233521972198420856-1233521997098456536
host-a006-lidar0-1236097911198275876-1236097936098518536
host-a011-lidar0-1233689347298085226-1233689372198299556
host-a004-lidar0-1232916227198731226-1232916252098964536
host-a004-lidar0-1235943691298894636-1235943716198114296
host-a011-lidar0-1232905354298416876-1232905379198827346
host-a004-lidar0-1232825454199122176-1232825479098530856
host-a004-lidar0-1235865310198314226-1235865335098810536
host-a007-lidar0-1233511892098885096-1233511916998404176
host-a004-lidar0-1235952483297492666-1235952508198691346
host-a007-lidar0-1236123864198260676-1236123889098156106
host-a011-lidar0-1232751443198845666-1232751468099206346
host-a011-lidar0-1233078523199259666-1233078548098819996
host-a004-lidar0-1233618447298368586-1233618472198344666
host-a007-lidar0-1230678335199240106-1230678360099285186
host-a004-lidar0-1233508848199005656-1233508873098993106
host-a011-lidar0-1233958777298868906-1233958802198879556
host-a007-lidar0-1233507949098749096-1233507973999200196
host-a004-lidar0-1233953506198624096-1233953531097898546
host-a015-lidar0-1236103405197509196-1236103430098109856
host-a007-lidar0-1233620674297428346-1233620699199028906
host-a012-lidar0-1235936434298098786-1235936459197995466
host-a011-lidar0-1233514154199609666-1233514179099494586
host-a009-lidar0-1231184014198521956-1231184039098791066
host-a004-lidar0-1236019079298483436-1236019104198926466
host-a006-lidar0-1236037423198601636-1236037448098691666
host-a004-lidar0-1231888238197475346-1231888263098211346
host-a010-lidar0-1232317276099120706-1232317300998122666
host-a004-lidar0-1232815694198030546-1232815719097692906
host-a007-lidar0-1233954463198147636-1233954488098173786
host-a004-lidar0-1232923791198849226-1232923816098237586
host-a011-lidar0-1236106196198429346-1236106221098810676
host-a015-lidar0-1236124649998497586-1236124674897796616
host-a004-lidar0-1232386904298052116-1232386929198958786
host-a014-lidar0-1235764007298887586-1235764032198224636
host-a011-lidar0-1233961953198251556-1233961978098299986
host-a015-lidar0-1234646248197843296-1234646273098426346
host-a004-lidar0-1232823719198617196-1232823744099030906
host-a006-lidar0-1232910084198374756-1232910109098099436
host-a008-lidar0-1231535639098399806-1231535663998755486
host-a006-lidar0-1232909811198488106-1232909836098046436
host-a011-lidar0-1234472721299135436-1234472746198770146
host-a004-lidar0-1233941123298835466-1233941148198539096
host-a004-lidar0-1232842293198603176-1232842318098888226
host-a011-lidar0-1235503241198830666-1235503266098892346
host-a007-lidar0-1233007421198689676-1233007446098306106
host-a011-lidar0-1235942334298364116-1235942359197798196
host-a011-lidar0-1233964524299020906-1233964549198793586
host-a011-lidar0-1236094810299383666-1236094835198207026
host-a004-lidar0-1233439181298702546-1233439206198541786
host-a004-lidar0-1233429602198887906-1233429627098124906
host-a011-lidar0-1233688997299519416-1233689022199633116
host-a011-lidar0-1233088421199045856-1233088446099117556
host-a011-lidar0-1235866237298160096-1235866262198355986
host-a004-lidar0-1232825508197625546-1232825533098566906
host-a007-lidar0-1233620829298440226-1233620854197980906
host-a011-lidar0-1233962268199095556-1233962293098293666
host-a009-lidar0-1236017846098562856-1236017870998139586
host-a004-lidar0-1233521339297626996-1233521364198425656
host-a011-lidar0-1233962379199267906-1233962404098719666
host-a007-lidar0-1234043217198181466-1234043242098437196
host-a007-lidar0-1233952679199347706-1233952704098573316
\ No newline at end of file
data/lyft/train.txt
0 → 100644
host-a101-lidar0-1241893239199111666-1241893264098084346
host-a006-lidar0-1236037883198113706-1236037908098879296
host-a011-lidar0-1235950297199142196-1235950322099405416
host-a012-lidar0-1235937130198577346-1235937155098071026
host-a101-lidar0-1240875136198305786-1240875161098795094
host-a011-lidar0-1232752461198357666-1232752486099793996
host-a102-lidar0-1241468916398562586-1241468941298742334
host-a009-lidar0-1236013297198927176-1236013322098616226
host-a008-lidar0-1235777217098625786-1235777241998473466
host-a009-lidar0-1236015606098226876-1236015630998447586
host-a011-lidar0-1236119823299280856-1236119848199397346
host-a011-lidar0-1233963416198495906-1233963441098571986
host-a102-lidar0-1241904536298706586-1241904561198322666
host-a004-lidar0-1236021339298624436-1236021364198408146
host-a011-lidar0-1233512833198873346-1233512858098831906
host-a007-lidar0-1232470052198454586-1232470077098888666
host-a004-lidar0-1233961181197891466-1233961206097713176
host-a011-lidar0-1232755934199060666-1232755959099356536
host-a011-lidar0-1235866707299065096-1235866732198912176
host-a007-lidar0-1234740239998520226-1234740264899399906
host-a011-lidar0-1234031754199586656-1234031779098211226
host-a011-lidar0-1233090630199206666-1233090655098843996
host-a004-lidar0-1233617933297688906-1233617958198483986
host-a004-lidar0-1233693263298064536-1233693288197865986
host-a007-lidar0-1233510590098435146-1233510614998778546
host-a102-lidar0-1241548686398885894-1241548711298381586
host-a015-lidar0-1233957265198932906-1233957290097795666
host-a009-lidar0-1236118456298488636-1236118481198250736
host-a007-lidar0-1230936221299185986-1230936246198612066
host-a011-lidar0-1232835293199223316-1232835318097646346
host-a102-lidar0-1242755400298847586-1242755425198579666
host-a004-lidar0-1233422539197434856-1233422564099152556
host-a101-lidar0-1242580003398722214-1242580028299473214
host-a007-lidar0-1233954992197900986-1233955017097982666
host-a004-lidar0-1233685315298191906-1233685340197986666
host-a008-lidar0-1236015187198059026-1236015212098657616
host-a011-lidar0-1232905783299194316-1232905808199600976
host-a004-lidar0-1233682997198306636-1233683022098032666
host-a006-lidar0-1236038131197892706-1236038156098552296
host-a007-lidar0-1233955131199105986-1233955156098128666
host-a007-lidar0-1234737958998212856-1234737983898305906
host-a102-lidar0-1241878200398362906-1241878225298546586
host-a011-lidar0-1232492192198952026-1232492217098860706
host-a009-lidar0-1231200854198312986-1231200879098460066
host-a011-lidar0-1232839939198224316-1232839964099010996
host-a101-lidar0-1242493624298705334-1242493649198973302
host-a101-lidar0-1243095610299140346-1243095635198749774
host-a011-lidar0-1234553365299813296-1234553390199271786
host-a011-lidar0-1233087918198597316-1233087943098472996
host-a004-lidar0-1232923266198326856-1232923291098716906
host-a007-lidar0-1233689791098884906-1233689815997978986
host-a011-lidar0-1232753667198514346-1232753692099110026
host-a004-lidar0-1232987652297797736-1232987677198272416
host-a011-lidar0-1232841333199326666-1232841358099777556
host-a011-lidar0-1235931114299585906-1235931139198414556
host-a004-lidar0-1234051595199019546-1234051620099157876
host-a004-lidar0-1232815252198642176-1232815277099387856
host-a011-lidar0-1234025165199395146-1234025190098246226
host-a004-lidar0-1233618009298519906-1233618034198134636
host-a004-lidar0-1235947081298918616-1235947106198383666
host-a004-lidar0-1233535955298751556-1233535980198120666
host-a101-lidar0-1241889710198571346-1241889735098952214
host-a004-lidar0-1233442991299115176-1233443016198370876
host-a007-lidar0-1233007769198478676-1233007794098024226
host-a011-lidar0-1233688931299349876-1233688956199708556
host-a004-lidar0-1236018644297896466-1236018669198234096
host-a004-lidar0-1233947108297817786-1233947133198765096
host-a004-lidar0-1235944794298214636-1235944819199047666
host-a007-lidar0-1233524852199250466-1233524877097605116
host-a008-lidar0-1235776284099006226-1235776308998255906
host-a101-lidar0-1242748817298870302-1242748842198675302
host-a007-lidar0-1233529706297796756-1233529731198779786
host-a011-lidar0-1232909940298708666-1232909965199312906
host-a011-lidar0-1232412236198491106-1232412261098202466
host-a011-lidar0-1236038911199051026-1236038936099738706
host-a008-lidar0-1236013033198326026-1236013058097878706
host-a101-lidar0-1242753236298794334-1242753261198702302
host-a011-lidar0-1235933627299543026-1235933652198559196
host-a004-lidar0-1233620260298411906-1233620285198333986
host-a101-lidar0-1242144886399176654-1242144911299066654
host-a102-lidar0-1242684244198410786-1242684269098866094
host-a011-lidar0-1232738595197630316-1232738620098495346
host-a007-lidar0-1233508020098603466-1233508044998550666
host-a102-lidar0-1242510597398871466-1242510622298829226
host-a101-lidar0-1243102866399012786-1243102891298922466
host-a009-lidar0-1236020733098808906-1236020757997885536
host-a011-lidar0-1232745770199275666-1232745795099558556
host-a101-lidar0-1241886983298988182-1241887008198992182
host-a004-lidar0-1233693191297468536-1233693216198791636
host-a017-lidar0-1236119797198435536-1236119822098126616
host-a007-lidar0-1233515286998507736-1233515311898052226
host-a007-lidar0-1230672860198383106-1230672885099108186
host-a004-lidar0-1233014343198383656-1233014368098267106
host-a004-lidar0-1233516150198382706-1233516175098720106
host-a007-lidar0-1230485630199365106-1230485655099030186
host-a004-lidar0-1232838138198668196-1232838163098533856
host-a011-lidar0-1236037921199248346-1236037946099430676
host-a011-lidar0-1233081021198156856-1233081046098157026
host-a004-lidar0-1232825386198046196-1232825411098056856
host-a017-lidar0-1236118981198431906-1236119006097572636
host-a015-lidar0-1236103725197932106-1236103750098792856
host-a007-lidar0-1233515591998210666-1233515616898664876
host-a007-lidar0-1232736726098319676-1232736750999473736
host-a004-lidar0-1233011743198634026-1233011768099043756
host-a101-lidar0-1242749258298976334-1242749283199254466
host-a007-lidar0-1232490767197744146-1232490792098614756
host-a011-lidar0-1234466278299425556-1234466303199121616
host-a011-lidar0-1236122349298071346-1236122374198621346
host-a101-lidar0-1241216089098610756-1241216113999079830
host-a004-lidar0-1233602012198802906-1233602037098984906
host-a004-lidar0-1233421984198960226-1233422009098905556
host-a004-lidar0-1233685221298830296-1233685246198471636
host-a017-lidar0-1236118873198607026-1236118898097847616
host-a011-lidar0-1233522430198228856-1233522455098303536
host-a101-lidar0-1242748985299274334-1242749010198891466
host-a004-lidar0-1234047743298156466-1234047768199244736
host-a015-lidar0-1235423635198474636-1235423660098038666
host-a004-lidar0-1233955731199067146-1233955756098663196
host-a009-lidar0-1236118555299408756-1236118580198077756
host-a007-lidar0-1234551913098444106-1234551937998728436
host-a101-lidar0-1241472407298206026-1241472432198409706
host-a011-lidar0-1236039783198411996-1236039808098336676
host-a015-lidar0-1236112601097782876-1236112625998366556
host-a012-lidar0-1237329862198269106-1237329887099105436
host-a007-lidar0-1231093036199514746-1231093061099651426
host-a007-lidar0-1233960442199212986-1233960467099041666
host-a009-lidar0-1236123717198611786-1236123742097892436
host-a009-lidar0-1237581206198345466-1237581231098504546
host-a011-lidar0-1233964282297830226-1233964307199768556
host-a011-lidar0-1232752778198249666-1232752803099491536
host-a102-lidar0-1242662270298972894-1242662295198395706
host-a011-lidar0-1232485958298280666-1232485983200054996
host-a005-lidar0-1231201437298603426-1231201462198815506
host-a007-lidar0-1233510205099156466-1233510229998563196
host-a102-lidar0-1242754954298696742-1242754979198120666
host-a004-lidar0-1233444816298625546-1233444841198302226
host-a004-lidar0-1232842166198181226-1232842191097390226
host-a004-lidar0-1232833308197903226-1232833333099155226
host-a011-lidar0-1232401360198078026-1232401385098379106
host-a011-lidar0-1236123625299234316-1236123650197952996
host-a004-lidar0-1233427004198119856-1233427029098998676
host-a102-lidar0-1242749461398477906-1242749486298996742
host-a102-lidar0-1242150795498255026-1242150820398693830
host-a011-lidar0-1232731591298977986-1232731616197888346
host-a011-lidar0-1233964369297973906-1233964394199186906
host-a011-lidar0-1232837612197878316-1232837637099721996
host-a101-lidar0-1241462203298815998-1241462228198805706
host-a009-lidar0-1236014648198307196-1236014673098985906
host-a007-lidar0-1233956183198377616-1233956208098469296
host-a004-lidar0-1232817645198462196-1232817670098101226
\ No newline at end of file
data/lyft/val.txt
0 → 100644
host-a004-lidar0-1233080749298771736-1233080774198118416
host-a004-lidar0-1232905197298264546-1232905222198133856
host-a011-lidar0-1232732468299489666-1232732493199050666
host-a101-lidar0-1241561147998866622-1241561172899320654
host-a006-lidar0-1237322885198285226-1237322910098576786
host-a004-lidar0-1233963848198981116-1233963873098642176
host-a011-lidar0-1232752543198025666-1232752568099126026
host-a004-lidar0-1232842367198056546-1232842392097783226
host-a004-lidar0-1233615989298293586-1233616014198854636
host-a011-lidar0-1233965426299054906-1233965451199121906
host-a011-lidar0-1236104034298928316-1236104059198988026
host-a007-lidar0-1233946614199227636-1233946639098289666
host-a015-lidar0-1235423696198069636-1235423721098551296
host-a004-lidar0-1233014843199117706-1233014868098023786
host-a011-lidar0-1236093962299300416-1236093987199363346
host-a011-lidar0-1234639296198260986-1234639321099417316
host-a011-lidar0-1233524871199389346-1233524896098591466
host-a011-lidar0-1235933781298838116-1235933806199517736
host-a011-lidar0-1233965312298542226-1233965337198958586
host-a011-lidar0-1233090567199118316-1233090592098933996
host-a007-lidar0-1233621256298511876-1233621281197988026
host-a007-lidar0-1233079617197863906-1233079642098533586
host-a015-lidar0-1236112516098396876-1236112540999028556
host-a008-lidar0-1236016333197799906-1236016358099063636
host-a101-lidar0-1240710366399037786-1240710391298976894
host-a102-lidar0-1242755350298764586-1242755375198787666
host-a101-lidar0-1240877587199107226-1240877612099413030
host-a101-lidar0-1242583745399163026-1242583770298821706
host-a011-lidar0-1232817034199342856-1232817059098800346
host-a004-lidar0-1232905117299287546-1232905142198246226
\ No newline at end of file
docs/getting_started.md

````diff
@@ -31,6 +31,21 @@ mmdetection3d
 │   │   │   ├── image_2
 │   │   │   ├── label_2
 │   │   │   ├── velodyne
+│   ├── lyft
+│   │   ├── v1.01-train
+│   │   │   ├── v1.01-train (train_data)
+│   │   │   ├── lidar (train_lidar)
+│   │   │   ├── images (train_images)
+│   │   │   ├── maps (train_maps)
+│   │   ├── v1.01-test
+│   │   │   ├── v1.01-test (test_data)
+│   │   │   ├── lidar (test_lidar)
+│   │   │   ├── images (test_images)
+│   │   │   ├── maps (test_maps)
+│   │   ├── train.txt
+│   │   ├── val.txt
+│   │   ├── test.txt
+│   │   ├── sample_submission.csv
 │   ├── scannet
 │   │   ├── meta_data
 │   │   ├── scans
@@ -57,6 +72,12 @@ Download KITTI 3D detection data [HERE](http://www.cvlibs.net/datasets/kitti/eva
 python tools/create_data.py kitti --root-path ./data/kitti --out-dir ./data/kitti --extra-tag kitti
 ```
+Download Lyft 3D detection data [HERE](https://www.kaggle.com/c/3d-object-detection-for-autonomous-vehicles/data). Prepare Lyft data by running
+```bash
+python tools/create_data.py lyft --root-path ./data/lyft --out-dir ./data/lyft --extra-tag lyft --version v1.01
+```
+
+Note that we follow the original folder names for clear organization. Please rename the raw folders as shown above.
 
 To prepare scannet data, please see [scannet](../data/scannet/README.md).
 
 To prepare sunrgbd data, please see [sunrgbd](../data/sunrgbd/README.md).
````
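The tree above can be sanity-checked before running the converter. A minimal sketch, assuming the layout shown in the docs; `check_lyft_layout` and `EXPECTED` are illustrative helpers, not part of mmdetection3d:

```python
import os
import os.path as osp

# Sub-paths that the documented tree expects under data/lyft (illustrative,
# mirroring the diff above).
EXPECTED = [
    'v1.01-train/v1.01-train',
    'v1.01-train/lidar',
    'v1.01-train/images',
    'v1.01-train/maps',
    'v1.01-test/v1.01-test',
    'v1.01-test/lidar',
    'v1.01-test/images',
    'v1.01-test/maps',
    'train.txt',
    'val.txt',
    'test.txt',
    'sample_submission.csv',
]


def check_lyft_layout(root):
    """Return the expected sub-paths missing under ``root``."""
    return [p for p in EXPECTED if not osp.exists(osp.join(root, p))]
```

Running such a check before `tools/create_data.py lyft` catches raw folders that were not renamed as instructed.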
mmdet3d/core/evaluation/__init__.py

```diff
 from .indoor_eval import indoor_eval
 from .kitti_utils import kitti_eval, kitti_eval_coco_style
+from .lyft_eval import lyft_eval
 
-__all__ = ['kitti_eval_coco_style', 'kitti_eval', 'indoor_eval']
+__all__ = ['kitti_eval_coco_style', 'kitti_eval', 'indoor_eval', 'lyft_eval']
```
mmdet3d/core/evaluation/lyft_eval.py
0 → 100644
```python
import os.path as osp

import mmcv
import numpy as np
from lyft_dataset_sdk.eval.detection.mAP_evaluation import (Box3D, get_ap,
                                                            get_class_names,
                                                            get_ious,
                                                            group_by_key,
                                                            wrap_in_box)
from mmcv.utils import print_log
from terminaltables import AsciiTable


def load_lyft_gts(lyft, data_root, eval_split: str, logger=None) -> list:
    """Loads ground truth boxes from database.

    Args:
        lyft (:obj:``LyftDataset``): Lyft class in the sdk.
        data_root (str): Root of data for reading splits.
        eval_split (str): Name of the split for evaluation.
        logger (logging.Logger | str | None): Logger used for printing
            related information during evaluation. Default: None.

    Returns:
        list[dict]: List of annotation dictionaries.
    """
    split_scenes = mmcv.list_from_file(
        osp.join(data_root, f'{eval_split}.txt'))

    # Read out all sample_tokens in DB.
    sample_tokens_all = [s['token'] for s in lyft.sample]
    assert len(sample_tokens_all) > 0, 'Error: Database has no samples!'

    if eval_split == 'test':
        # Check that you aren't trying to cheat :)
        assert len(lyft.sample_annotation) > 0, \
            'Error: You are trying to evaluate on the test set ' \
            'but you do not have the annotations!'

    sample_tokens = []
    for sample_token in sample_tokens_all:
        scene_token = lyft.get('sample', sample_token)['scene_token']
        scene_record = lyft.get('scene', scene_token)
        if scene_record['name'] in split_scenes:
            sample_tokens.append(sample_token)

    all_annotations = []

    print_log('Loading ground truth annotations...', logger=logger)
    # Load annotations and filter predictions and annotations.
    for sample_token in mmcv.track_iter_progress(sample_tokens):
        sample = lyft.get('sample', sample_token)
        sample_annotation_tokens = sample['anns']
        for sample_annotation_token in sample_annotation_tokens:
            # Get label name in detection task and filter unused labels.
            sample_annotation = \
                lyft.get('sample_annotation', sample_annotation_token)
            detection_name = sample_annotation['category_name']
            if detection_name is None:
                continue
            annotation = {
                'sample_token': sample_token,
                'translation': sample_annotation['translation'],
                'size': sample_annotation['size'],
                'rotation': sample_annotation['rotation'],
                'name': detection_name,
            }
            all_annotations.append(annotation)

    return all_annotations
```
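The split filtering in `load_lyft_gts` keys on scene names: a sample is kept only if its parent scene's `name` appears in the split file. That selection logic can be sketched standalone; the `samples` and `scenes` dicts below are synthetic stand-ins for the SDK's tables, not real Lyft data:

```python
def filter_tokens_by_split(samples, scenes, split_scenes):
    """Keep sample tokens whose parent scene's name is in the split.

    samples: token -> {'scene_token': ...}; scenes: token -> {'name': ...}.
    """
    kept = []
    for token, sample in samples.items():
        scene_name = scenes[sample['scene_token']]['name']
        if scene_name in split_scenes:
            kept.append(token)
    return kept
```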
```python
def load_lyft_predictions(res_path):
    """Load Lyft predictions from json file.

    Args:
        res_path (str): Path of result json file recording detections.

    Returns:
        list[dict]: List of prediction dictionaries.
    """
    predictions = mmcv.load(res_path)
    predictions = predictions['results']
    all_preds = []
    for sample_token in predictions.keys():
        all_preds.extend(predictions[sample_token])
    return all_preds
```
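The result file read by `load_lyft_predictions` is a JSON object whose `results` field maps each sample token to a list of detection dicts. The flattening step can be reproduced with the standard `json` module; the two-sample payload below is made up for illustration:

```python
import json

# Made-up result payload in the shape `load_lyft_predictions` expects.
raw = json.loads("""
{
  "results": {
    "token-1": [{"name": "car", "score": 0.9}],
    "token-2": [{"name": "truck", "score": 0.4},
                {"name": "car", "score": 0.2}]
  }
}
""")

# Same flattening as in the function above: one flat list over all samples.
all_preds = []
for sample_token in raw['results'].keys():
    all_preds.extend(raw['results'][sample_token])
```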
```python
def lyft_eval(lyft, data_root, res_path, eval_set, output_dir, logger=None):
    """Evaluation API for Lyft dataset.

    Args:
        lyft (:obj:``LyftDataset``): Lyft class in the sdk.
        data_root (str): Root of data for reading splits.
        res_path (str): Path of result json file recording detections.
        eval_set (str): Name of the split for evaluation.
        output_dir (str): Output directory for output json files.
        logger (logging.Logger | str | None): Logger used for printing
            related information during evaluation. Default: None.

    Returns:
        dict: The metric dictionary recording the evaluation results.
    """
    # evaluate by lyft metrics
    gts = load_lyft_gts(lyft, data_root, eval_set, logger)
    predictions = load_lyft_predictions(res_path)

    class_names = get_class_names(gts)
    print_log('Evaluating...', logger=logger)

    class_table = AsciiTable([class_names], title='Class Names')
    print_log(class_table.table, logger=logger)

    iou_thresholds = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
    metrics = {}
    average_precisions = \
        get_classwise_aps(gts, predictions, class_names, iou_thresholds)
    APs_data = [['IOU', 0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]]

    mAPs = np.mean(average_precisions, axis=0)
    mAPs_cate = np.mean(average_precisions, axis=1)
    final_mAP = np.mean(mAPs)

    metrics['average_precisions'] = average_precisions.tolist()
    metrics['mAPs'] = mAPs.tolist()
    metrics['Final mAP'] = float(final_mAP)
    metrics['class_names'] = class_names
    metrics['mAPs_cate'] = mAPs_cate.tolist()

    APs_data = [['class', 'mAP@0.5:0.95']]
    for i in range(len(class_names)):
        row = [class_names[i], round(mAPs_cate[i], 3)]
        APs_data.append(row)
    APs_data.append(['Overall', round(final_mAP, 3)])
    APs_table = AsciiTable(APs_data, title='mAPs@0.5:0.95')
    print_log(APs_table.table, logger=logger)

    res_path = osp.join(output_dir, 'lyft_metrics.json')
    mmcv.dump(metrics, res_path)
    return metrics
```
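The aggregation in `lyft_eval` reduces the `(num_classes, num_iou_thresholds)` AP matrix along both axes: `axis=0` averages over classes to give a per-threshold mAP, `axis=1` averages over thresholds to give a per-class mAP (`mAPs_cate`), and the overall score is the grand mean. With a toy 2x3 matrix:

```python
import numpy as np

# Toy AP matrix: 2 classes evaluated at 3 IOU thresholds.
average_precisions = np.array([[0.8, 0.6, 0.4],
                               [0.6, 0.4, 0.2]])

mAPs = np.mean(average_precisions, axis=0)       # per threshold, approx [0.7, 0.5, 0.3]
mAPs_cate = np.mean(average_precisions, axis=1)  # per class, approx [0.6, 0.4]
final_mAP = float(np.mean(mAPs))                 # overall, approx 0.5
```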
def get_classwise_aps(gt: list, predictions: list, class_names: list,
                      iou_thresholds: list) -> np.array:
    """Returns an array with an average precision per class.

    Note: Ground truth and predictions should have the following format.

    .. code-block::

        gt = [{
            'sample_token': '0f0e3ce89d2324d8b45aa55a7b4f8207
                             fbb039a550991a5149214f98cec136ac',
            'translation': [974.2811881299899, 1714.6815014457964,
                            -23.689857123368846],
            'size': [1.796, 4.488, 1.664],
            'rotation': [0.14882026466054782, 0, 0, 0.9888642620837121],
            'name': 'car'
        }]

        predictions = [{
            'sample_token': '0f0e3ce89d2324d8b45aa55a7b4f8207
                             fbb039a550991a5149214f98cec136ac',
            'translation': [971.8343488872263, 1713.6816097857359,
                            -25.82534357061308],
            'size': [2.519726579986132, 7.810161372666739, 3.483438286096803],
            'rotation': [0.10913582721095375, 0.04099572636992043,
                         0.01927712319721745, 1.029328402625659],
            'name': 'car',
            'score': 0.3077029437237213
        }]

    Args:
        gt (list[dict]): list of ground-truth dictionaries in the format
            described above.
        predictions (list[dict]): list of prediction dictionaries in the
            format described above.
        class_names (list[str]): list of the class names.
        iou_thresholds (list[float]): IOU thresholds used to calculate
            TP / FN.

    Returns:
        np.ndarray: an array with an average precision per class.
    """
    assert all([0 <= iou_th <= 1 for iou_th in iou_thresholds])

    gt_by_class_name = group_by_key(gt, 'name')
    pred_by_class_name = group_by_key(predictions, 'name')

    average_precisions = np.zeros((len(class_names), len(iou_thresholds)))

    for class_id, class_name in enumerate(class_names):
        if class_name in pred_by_class_name:
            recalls, precisions, average_precision = get_single_class_aps(
                gt_by_class_name[class_name], pred_by_class_name[class_name],
                iou_thresholds)
            average_precisions[class_id, :] = average_precision

    return average_precisions
def get_single_class_aps(gt, predictions, iou_thresholds):
    """Compute recall and precision for all iou thresholds.

    Adapted from LyftDatasetDevkit.

    Args:
        gt (list[dict]): list of ground-truth dictionaries in the format
            described above.
        predictions (list[dict]): list of prediction dictionaries in the
            format described above.
        iou_thresholds (list[float]): IOU thresholds used to calculate
            TP / FN.

    Returns:
        tuple[np.ndarray]: returns (recalls, precisions, average precisions)
        for each class.
    """
    num_gts = len(gt)
    image_gts = group_by_key(gt, 'sample_token')
    image_gts = wrap_in_box(image_gts)

    sample_gt_checked = {
        sample_token: np.zeros((len(boxes), len(iou_thresholds)))
        for sample_token, boxes in image_gts.items()
    }

    predictions = sorted(predictions, key=lambda x: x['score'], reverse=True)

    # go down dets and mark TPs and FPs
    num_predictions = len(predictions)
    tps = np.zeros((num_predictions, len(iou_thresholds)))
    fps = np.zeros((num_predictions, len(iou_thresholds)))

    for prediction_index, prediction in enumerate(predictions):
        predicted_box = Box3D(**prediction)

        sample_token = prediction['sample_token']

        max_overlap = -np.inf
        jmax = -1

        if sample_token in image_gts:
            gt_boxes = image_gts[sample_token]  # gt_boxes per sample
            gt_checked = sample_gt_checked[sample_token]  # gt flags per sample
        else:
            gt_boxes = []
            gt_checked = None

        if len(gt_boxes) > 0:
            overlaps = get_ious(gt_boxes, predicted_box)

            max_overlap = np.max(overlaps)
            jmax = np.argmax(overlaps)

        for i, iou_threshold in enumerate(iou_thresholds):
            if max_overlap > iou_threshold:
                if gt_checked[jmax, i] == 0:
                    tps[prediction_index, i] = 1.0
                    gt_checked[jmax, i] = 1
                else:
                    fps[prediction_index, i] = 1.0
            else:
                fps[prediction_index, i] = 1.0

    # compute precision recall
    fps = np.cumsum(fps, axis=0)
    tps = np.cumsum(tps, axis=0)
    recalls = tps / float(num_gts)
    # avoid divide by zero in case the first detection
    # matches a difficult ground truth
    precisions = tps / np.maximum(tps + fps, np.finfo(np.float64).eps)

    aps = []
    for i in range(len(iou_thresholds)):
        recall = recalls[:, i]
        precision = precisions[:, i]
        assert np.all(0 <= recall) & np.all(recall <= 1)
        assert np.all(0 <= precision) & np.all(precision <= 1)
        ap = get_ap(recall, precision)
        aps.append(ap)

    aps = np.array(aps)

    return recalls, precisions, aps
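The cumulative-sum step above is the core of the AP computation: per-detection TP/FP flags, sorted by descending score, become running precision/recall curves. A minimal self-contained sketch of that step at a single IoU threshold, with made-up flags (the variable values are illustrative, not real evaluation data):

```python
import numpy as np

# Hypothetical flags for 5 detections sorted by descending score:
# 1.0 = true positive, 0.0 = false positive.
tps = np.array([1.0, 1.0, 0.0, 1.0, 0.0])
fps = 1.0 - tps
num_gts = 4  # assumed number of ground-truth boxes

# Running counts: after the k-th detection, how many TPs/FPs so far.
tps_cum = np.cumsum(tps)
fps_cum = np.cumsum(fps)

recalls = tps_cum / float(num_gts)
# eps guard mirrors the divide-by-zero protection in the code above
precisions = tps_cum / np.maximum(tps_cum + fps_cum, np.finfo(np.float64).eps)

print(recalls)     # monotonically non-decreasing
print(precisions)  # dips whenever a false positive is ranked in
```

Because detections are pre-sorted by confidence, each prefix of the arrays corresponds to one operating point of the detector, which is exactly what `get_ap` integrates over.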
mmdet3d/datasets/__init__.py
View file @ 98cfb2ee
@@ -2,6 +2,7 @@ from mmdet.datasets.builder import DATASETS, build_dataloader, build_dataset
 from .custom_3d import Custom3DDataset
 from .kitti2d_dataset import Kitti2DDataset
 from .kitti_dataset import KittiDataset
+from .lyft_dataset import LyftDataset
 from .nuscenes_dataset import NuScenesDataset
 from .pipelines import (GlobalRotScaleTrans, IndoorPointSample,
                         LoadAnnotations3D, LoadPointsFromFile,
@@ -14,9 +15,9 @@ from .sunrgbd_dataset import SUNRGBDDataset
 __all__ = [
     'KittiDataset', 'GroupSampler', 'DistributedGroupSampler',
     'build_dataloader', 'RepeatFactorDataset', 'DATASETS', 'build_dataset',
-    'CocoDataset', 'Kitti2DDataset', 'NuScenesDataset', 'ObjectSample',
-    'RandomFlip3D', 'ObjectNoise', 'GlobalRotScaleTrans', 'PointShuffle',
-    'ObjectRangeFilter', 'PointsRangeFilter', 'Collect3D',
+    'CocoDataset', 'Kitti2DDataset', 'NuScenesDataset', 'LyftDataset',
+    'ObjectSample', 'RandomFlip3D', 'ObjectNoise', 'GlobalRotScaleTrans',
+    'PointShuffle', 'ObjectRangeFilter', 'PointsRangeFilter', 'Collect3D',
     'LoadPointsFromFile', 'NormalizePointsColor', 'IndoorPointSample',
     'LoadAnnotations3D', 'SUNRGBDDataset', 'ScanNetDataset', 'Custom3DDataset'
 ]
mmdet3d/datasets/custom_3d.py
View file @ 98cfb2ee
@@ -24,12 +24,13 @@ class Custom3DDataset(Dataset):
             Defaults to None.
         classes (tuple[str], optional): Classes used in the dataset.
             Defaults to None.
-        modality ([dict], optional): Modality to specify the sensor data used
+        modality (dict, optional): Modality to specify the sensor data used
             as input. Defaults to None.
         box_type_3d (str, optional): Type of 3D box of this dataset.
             Based on the `box_type_3d`, the dataset will encapsulate the box
             to its original format then converted them to `box_type_3d`.
             Defaults to 'LiDAR'. Available options includes
             - 'LiDAR': box in LiDAR coordinates
             - 'Depth': box in depth coordinates, usually for indoor dataset
             - 'Camera': box in camera coordinates
...
mmdet3d/datasets/kitti_dataset.py
View file @ 98cfb2ee
@@ -16,7 +16,40 @@ from .custom_3d import Custom3DDataset
 @DATASETS.register_module()
 class KittiDataset(Custom3DDataset):
+    """KITTI Dataset.
+
+    This class serves as the API for experiments on the KITTI Dataset.
+
+    Please refer to
+    `<http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=3d>`_
+    for data downloading. It is recommended to symlink the dataset root to
+    $MMDETECTION3D/data and organize them as the doc shows.
+
+    Args:
+        data_root (str): Path of dataset root.
+        ann_file (str): Path of annotation file.
+        split (str): Split of input data.
+        pts_prefix (str, optional): Prefix of points files.
+            Defaults to 'velodyne'.
+        pipeline (list[dict], optional): Pipeline used for data processing.
+            Defaults to None.
+        classes (tuple[str], optional): Classes used in the dataset.
+            Defaults to None.
+        modality (dict, optional): Modality to specify the sensor data used
+            as input. Defaults to None.
+        box_type_3d (str, optional): Type of 3D box of this dataset.
+            Based on the `box_type_3d`, the dataset will encapsulate the box
+            to its original format then converted them to `box_type_3d`.
+            Defaults to 'LiDAR' in this dataset. Available options includes
+            - 'LiDAR': box in LiDAR coordinates
+            - 'Depth': box in depth coordinates, usually for indoor dataset
+            - 'Camera': box in camera coordinates
+        filter_empty_gt (bool, optional): Whether to filter empty GT.
+            Defaults to True.
+        test_mode (bool, optional): Whether the dataset is in test mode.
+            Defaults to False.
+    """
     CLASSES = ('car', 'pedestrian', 'cyclist')

     def __init__(self,
...
@@ -189,7 +222,7 @@ class KittiDataset(Custom3DDataset):
         """Evaluation in KITTI protocol.

         Args:
-            results (list): Testing results of the dataset.
+            results (list[dict]): Testing results of the dataset.
             metric (str | list[str]): Metrics to be evaluated.
             logger (logging.Logger | str | None): Logger used for printing
                 related information during evaluation. Default: None.
...
@@ -352,7 +385,7 @@ class KittiDataset(Custom3DDataset):
         """Convert results to kitti format for evaluation and test submission.

         Args:
-            net_outputs (List[array]): list of array storing the bbox and score
+            net_outputs (List[np.ndarray]): list of array storing the bbox and score
             class_nanes (List[String]): A list of class names
             pklfile_prefix (str | None): The prefix of pkl file.
             submission_prefix (str | None): The prefix of submission file.
...
mmdet3d/datasets/lyft_dataset.py
0 → 100644
View file @
98cfb2ee
import os.path as osp
import tempfile

import mmcv
import numpy as np
import pandas as pd
from lyft_dataset_sdk.lyftdataset import LyftDataset as Lyft
from lyft_dataset_sdk.utils.data_classes import Box as LyftBox
from pyquaternion import Quaternion

from mmdet3d.core.evaluation.lyft_eval import lyft_eval
from mmdet.datasets import DATASETS
from ..core.bbox import LiDARInstance3DBoxes
from .custom_3d import Custom3DDataset


@DATASETS.register_module()
class LyftDataset(Custom3DDataset):
    """Lyft Dataset.

    This class serves as the API for experiments on the Lyft Dataset.

    Please refer to
    `<https://www.kaggle.com/c/3d-object-detection-for-autonomous-vehicles/data>`_
    for data downloading. It is recommended to symlink the dataset root to
    $MMDETECTION3D/data and organize them as the doc shows.

    Args:
        ann_file (str): Path of annotation file.
        pipeline (list[dict], optional): Pipeline used for data processing.
            Defaults to None.
        data_root (str): Path of dataset root.
        classes (tuple[str], optional): Classes used in the dataset.
            Defaults to None.
        load_interval (int, optional): Interval of loading the dataset. It is
            used to uniformly sample the dataset. Defaults to 1.
        modality (dict, optional): Modality to specify the sensor data used
            as input. Defaults to None.
        box_type_3d (str, optional): Type of 3D box of this dataset.
            Based on the `box_type_3d`, the dataset will encapsulate the box
            to its original format then converted them to `box_type_3d`.
            Defaults to 'LiDAR' in this dataset. Available options includes
            - 'LiDAR': box in LiDAR coordinates
            - 'Depth': box in depth coordinates, usually for indoor dataset
            - 'Camera': box in camera coordinates
        filter_empty_gt (bool, optional): Whether to filter empty GT.
            Defaults to True.
        test_mode (bool, optional): Whether the dataset is in test mode.
            Defaults to False.
    """
    NameMapping = {
        'bicycle': 'bicycle',
        'bus': 'bus',
        'car': 'car',
        'emergency_vehicle': 'emergency_vehicle',
        'motorcycle': 'motorcycle',
        'other_vehicle': 'other_vehicle',
        'pedestrian': 'pedestrian',
        'truck': 'truck',
        'animal': 'animal'
    }
    DefaultAttribute = {
        'car': 'is_stationary',
        'truck': 'is_stationary',
        'bus': 'is_stationary',
        'emergency_vehicle': 'is_stationary',
        'other_vehicle': 'is_stationary',
        'motorcycle': 'is_stationary',
        'bicycle': 'is_stationary',
        'pedestrian': 'is_stationary',
        'animal': 'is_stationary'
    }
    CLASSES = ('car', 'truck', 'bus', 'emergency_vehicle', 'other_vehicle',
               'motorcycle', 'bicycle', 'pedestrian', 'animal')

    def __init__(self,
                 ann_file,
                 pipeline=None,
                 data_root=None,
                 classes=None,
                 load_interval=1,
                 modality=None,
                 box_type_3d='LiDAR',
                 filter_empty_gt=True,
                 test_mode=False):
        self.load_interval = load_interval
        super().__init__(
            data_root=data_root,
            ann_file=ann_file,
            pipeline=pipeline,
            classes=classes,
            modality=modality,
            box_type_3d=box_type_3d,
            filter_empty_gt=filter_empty_gt,
            test_mode=test_mode)

        if self.modality is None:
            self.modality = dict(
                use_camera=False,
                use_lidar=True,
                use_radar=False,
                use_map=False,
                use_external=False,
            )

    def load_annotations(self, ann_file):
        """Load annotations from ann_file.

        Args:
            ann_file (str): Path of the annotation file.

        Returns:
            list[dict]: List of annotations sorted by timestamps.
        """
        data = mmcv.load(ann_file)
        data_infos = list(sorted(data['infos'], key=lambda e: e['timestamp']))
        data_infos = data_infos[::self.load_interval]
        self.metadata = data['metadata']
        self.version = self.metadata['version']
        return data_infos

    def get_data_info(self, index):
        """Get data info according to the given index.

        Args:
            index (int): Index of the sample data to get.

        Returns:
            dict: Standard input_dict consists of the data information.

                - sample_idx (str): sample index
                - pts_filename (str): filename of point clouds
                - sweeps (list[dict]): infos of sweeps
                - timestamp (float): sample timestamp
                - img_filename (str, optional): image filename
                - lidar2img (list[np.ndarray], optional): transformations
                    from lidar to different cameras
                - ann_info (dict): annotation info
        """
        info = self.data_infos[index]

        # standard protocol modified from SECOND.Pytorch
        input_dict = dict(
            sample_idx=info['token'],
            pts_filename=info['lidar_path'],
            sweeps=info['sweeps'],
            timestamp=info['timestamp'] / 1e6,
        )

        if self.modality['use_camera']:
            image_paths = []
            lidar2img_rts = []
            for cam_type, cam_info in info['cams'].items():
                image_paths.append(cam_info['data_path'])
                # obtain lidar to image transformation matrix
                lidar2cam_r = np.linalg.inv(cam_info['sensor2lidar_rotation'])
                lidar2cam_t = cam_info[
                    'sensor2lidar_translation'] @ lidar2cam_r.T
                lidar2cam_rt = np.eye(4)
                lidar2cam_rt[:3, :3] = lidar2cam_r.T
                lidar2cam_rt[3, :3] = -lidar2cam_t
                intrinsic = cam_info['cam_intrinsic']
                viewpad = np.eye(4)
                viewpad[:intrinsic.shape[0], :intrinsic.shape[1]] = intrinsic
                lidar2img_rt = (viewpad @ lidar2cam_rt.T)
                lidar2img_rts.append(lidar2img_rt)

            input_dict.update(
                dict(
                    img_filename=image_paths,
                    lidar2img=lidar2img_rts,
                ))

        if not self.test_mode:
            annos = self.get_ann_info(index)
            input_dict['ann_info'] = annos

        return input_dict

    def get_ann_info(self, index):
        """Get annotation info according to the given index.

        Args:
            index (int): Index of the annotation data to get.

        Returns:
            dict: Standard annotation dictionary consists of the
                data information.

                - gt_bboxes_3d (:obj:`LiDARInstance3DBoxes`):
                    3D ground truth bboxes
                - gt_labels_3d (np.ndarray): labels of ground truths
                - gt_names (list[str]): class names of ground truths
        """
        info = self.data_infos[index]
        gt_bboxes_3d = info['gt_boxes']
        gt_names_3d = info['gt_names']
        gt_labels_3d = []
        for cat in gt_names_3d:
            if cat in self.CLASSES:
                gt_labels_3d.append(self.CLASSES.index(cat))
            else:
                gt_labels_3d.append(-1)
        gt_labels_3d = np.array(gt_labels_3d)

        if 'gt_shape' in info:
            gt_shape = info['gt_shape']
            gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_shape], axis=-1)

        # the lyft box center is [0.5, 0.5, 0.5], we change it to be
        # the same as KITTI (0.5, 0.5, 0)
        gt_bboxes_3d = LiDARInstance3DBoxes(
            gt_bboxes_3d,
            box_dim=gt_bboxes_3d.shape[-1],
            origin=(0.5, 0.5, 0.5)).convert_to(self.box_mode_3d)

        anns_results = dict(
            gt_bboxes_3d=gt_bboxes_3d,
            gt_labels_3d=gt_labels_3d,
        )
        return anns_results

    def _format_bbox(self, results, jsonfile_prefix=None):
        """Convert the results to the standard format.

        Args:
            results (list[dict]): Testing results of the dataset.
            jsonfile_prefix (str): The prefix of the output jsonfile.
                You can specify the output directory/filename by
                modifying the jsonfile_prefix. Default: None.

        Returns:
            str: Path of the output json file.
        """
        lyft_annos = {}
        mapped_class_names = self.CLASSES

        print('Start to convert detection format...')
        for sample_id, det in enumerate(mmcv.track_iter_progress(results)):
            annos = []
            boxes = output_to_lyft_box(det)
            sample_token = self.data_infos[sample_id]['token']
            boxes = lidar_lyft_box_to_global(self.data_infos[sample_id], boxes)
            for i, box in enumerate(boxes):
                name = mapped_class_names[box.label]
                lyft_anno = dict(
                    sample_token=sample_token,
                    translation=box.center.tolist(),
                    size=box.wlh.tolist(),
                    rotation=box.orientation.elements.tolist(),
                    name=name,
                    score=box.score)
                annos.append(lyft_anno)
            lyft_annos[sample_token] = annos
        lyft_submissions = {
            'meta': self.modality,
            'results': lyft_annos,
        }

        mmcv.mkdir_or_exist(jsonfile_prefix)
        res_path = osp.join(jsonfile_prefix, 'results_lyft.json')
        print('Results writes to', res_path)
        mmcv.dump(lyft_submissions, res_path)
        return res_path

    def _evaluate_single(self,
                         result_path,
                         logger=None,
                         metric='bbox',
                         result_name='pts_bbox'):
        """Evaluation for a single model in Lyft protocol.

        Args:
            result_path (str): Path of the result file.
            logger (logging.Logger | str | None): Logger used for printing
                related information during evaluation. Default: None.
            metric (str): Metric name used for evaluation. Default: 'bbox'.
            result_name (str): Result name in the metric prefix.
                Default: 'pts_bbox'.

        Returns:
            dict: Dictionary of evaluation details.
        """
        output_dir = osp.join(*osp.split(result_path)[:-1])
        lyft = Lyft(
            data_path=osp.join(self.data_root, self.version),
            json_path=osp.join(self.data_root, self.version, self.version),
            verbose=True)
        eval_set_map = {
            'v1.01-train': 'val',
        }
        metrics = lyft_eval(lyft, self.data_root, result_path,
                            eval_set_map[self.version], output_dir, logger)

        # record metrics
        detail = dict()
        metric_prefix = f'{result_name}_Lyft'

        for i, name in enumerate(metrics['class_names']):
            AP = float(round(metrics['mAPs_cate'][i], 3))
            detail[f'{metric_prefix}/{name}_AP'] = AP

        detail[f'{metric_prefix}/mAP'] = metrics['Final mAP']
        return detail

    def format_results(self, results, jsonfile_prefix=None, csv_savepath=None):
        """Format the results to json (standard format for COCO evaluation).

        Args:
            results (list[dict]): Testing results of the dataset.
            jsonfile_prefix (str | None): The prefix of json files. It includes
                the file path and the prefix of filename, e.g., "a/b/prefix".
                If not specified, a temp file will be created. Default: None.
            csv_savepath (str | None): The path for saving csv files.
                It includes the file path and the csv filename,
                e.g., "a/b/filename.csv". If not specified,
                the result will not be converted to csv file.

        Returns:
            tuple (dict, str): result_files is a dict containing the json
                filepaths, tmp_dir is the temporal directory created for
                saving json files when jsonfile_prefix is not specified.
        """
        assert isinstance(results, list), 'results must be a list'
        assert len(results) == len(self), (
            'The length of results is not equal to the dataset len: {} != {}'.
            format(len(results), len(self)))

        if jsonfile_prefix is None:
            tmp_dir = tempfile.TemporaryDirectory()
            jsonfile_prefix = osp.join(tmp_dir.name, 'results')
        else:
            tmp_dir = None

        if not isinstance(results[0], dict):
            result_files = self._format_bbox(results, jsonfile_prefix)
        else:
            result_files = dict()
            for name in results[0]:
                print(f'\nFormatting bboxes of {name}')
                results_ = [out[name] for out in results]
                tmp_file_ = osp.join(jsonfile_prefix, name)
                result_files.update(
                    {name: self._format_bbox(results_, tmp_file_)})
        if csv_savepath is not None:
            self.json2csv(result_files['pts_bbox'], csv_savepath)
        return result_files, tmp_dir

    def evaluate(self,
                 results,
                 metric='bbox',
                 logger=None,
                 jsonfile_prefix=None,
                 csv_savepath=None,
                 result_names=['pts_bbox']):
        """Evaluation in Lyft protocol.

        Args:
            results (list[dict]): Testing results of the dataset.
            metric (str | list[str]): Metrics to be evaluated.
            logger (logging.Logger | str | None): Logger used for printing
                related information during evaluation. Default: None.
            jsonfile_prefix (str | None): The prefix of json files. It includes
                the file path and the prefix of filename, e.g., "a/b/prefix".
                If not specified, a temp file will be created. Default: None.
            csv_savepath (str | None): The path for saving csv files.
                It includes the file path and the csv filename,
                e.g., "a/b/filename.csv". If not specified,
                the result will not be converted to csv file.

        Returns:
            dict[str: float]
        """
        result_files, tmp_dir = self.format_results(results, jsonfile_prefix,
                                                    csv_savepath)

        if isinstance(result_files, dict):
            results_dict = dict()
            for name in result_names:
                print(f'Evaluating bboxes of {name}')
                ret_dict = self._evaluate_single(result_files[name])
                results_dict.update(ret_dict)
        elif isinstance(result_files, str):
            results_dict = self._evaluate_single(result_files)

        if tmp_dir is not None:
            tmp_dir.cleanup()
        return results_dict

    @staticmethod
    def json2csv(json_path, csv_savepath):
        """Convert the json file to csv format for submission.

        Args:
            json_path (str): Path of the result json file.
            csv_savepath (str): Path to save the csv file.
        """
        with open(json_path, 'r') as f:
            results = mmcv.load(f)['results']
        csv_nopred = 'data/lyft/sample_submission.csv'
        data = pd.read_csv(csv_nopred)
        Id_list = list(data['Id'])
        pred_list = list(data['PredictionString'])
        cnt = 0
        print('Converting the json to csv...')
        for token in results.keys():
            cnt += 1
            predictions = results[token]
            prediction_str = ''
            for i in range(len(predictions)):
                prediction_str += \
                    str(predictions[i]['score']) + ' ' + \
                    str(predictions[i]['translation'][0]) + ' ' + \
                    str(predictions[i]['translation'][1]) + ' ' + \
                    str(predictions[i]['translation'][2]) + ' ' + \
                    str(predictions[i]['size'][0]) + ' ' + \
                    str(predictions[i]['size'][1]) + ' ' + \
                    str(predictions[i]['size'][2]) + ' ' + \
                    str(Quaternion(list(predictions[i]['rotation']))
                        .yaw_pitch_roll[0]) + ' ' + \
                    predictions[i]['name'] + ' '
            prediction_str = prediction_str[:-1]
            idx = Id_list.index(token)
            pred_list[idx] = prediction_str
        df = pd.DataFrame({'Id': Id_list, 'PredictionString': pred_list})
        df.to_csv(csv_savepath, index=False)


def output_to_lyft_box(detection):
    """Convert the output to the box class in the Lyft.

    Args:
        detection (dict): Detection results.

    Returns:
        list[:obj:`LyftBox`]: List of standard LyftBoxes.
    """
    box3d = detection['boxes_3d']
    scores = detection['scores_3d'].numpy()
    labels = detection['labels_3d'].numpy()

    box_gravity_center = box3d.gravity_center.numpy()
    box_dims = box3d.dims.numpy()
    box_yaw = box3d.yaw.numpy()
    # TODO: check whether this is necessary
    # with dir_offset & dir_limit in the head
    box_yaw = -box_yaw - np.pi / 2

    box_list = []
    for i in range(len(box3d)):
        quat = Quaternion(axis=[0, 0, 1], radians=box_yaw[i])
        box = LyftBox(
            box_gravity_center[i],
            box_dims[i],
            quat,
            label=labels[i],
            score=scores[i])
        box_list.append(box)
    return box_list


def lidar_lyft_box_to_global(info, boxes):
    """Convert the box from ego to global coordinate.

    Args:
        info (dict): Info for a specific sample data, including the
            calibration information.
        boxes (list[:obj:`LyftBox`]): List of predicted LyftBoxes.

    Returns:
        list: List of standard LyftBoxes in the global coordinate.
    """
    box_list = []
    for box in boxes:
        # Move box to ego vehicle coord system
        box.rotate(Quaternion(info['lidar2ego_rotation']))
        box.translate(np.array(info['lidar2ego_translation']))
        # Move box to global coord system
        box.rotate(Quaternion(info['ego2global_rotation']))
        box.translate(np.array(info['ego2global_translation']))
        box_list.append(box)
    return box_list
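The origin comment in `get_ann_info` (Lyft boxes referenced at the geometric center `(0.5, 0.5, 0.5)`, KITTI-style boxes at the bottom center `(0.5, 0.5, 0)`) amounts to shifting z down by half the box height. A minimal numpy-only sketch of that conversion, with a made-up box and a hypothetical helper name (the real conversion happens inside `LiDARInstance3DBoxes`):

```python
import numpy as np

# Hypothetical box: (x, y, z, w, l, h, yaw) with z at the geometric center,
# i.e. origin (0.5, 0.5, 0.5) as in the Lyft annotations.
box = np.array([10.0, 2.0, 1.0, 1.8, 4.5, 1.6, 0.3])

def center_to_bottom(box):
    """Move the reference point from (0.5, 0.5, 0.5) to (0.5, 0.5, 0)."""
    out = box.copy()
    out[2] -= out[5] / 2.0  # z minus half the height
    return out

print(center_to_bottom(box))  # z shifted down by 0.8
```

Only the z coordinate changes; the box dimensions and yaw are unaffected by the choice of reference point.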
mmdet3d/datasets/nuscenes_dataset.py
View file @ 98cfb2ee
@@ -14,6 +14,42 @@ from .custom_3d import Custom3DDataset
 @DATASETS.register_module()
 class NuScenesDataset(Custom3DDataset):
+    """NuScenes Dataset.
+
+    This class serves as the API for experiments on the NuScenes Dataset.
+
+    Please refer to `<https://www.nuscenes.org/download>`_ for data
+    downloading. It is recommended to symlink the dataset root to
+    $MMDETECTION3D/data and organize them as the doc shows.
+
+    Args:
+        ann_file (str): Path of annotation file.
+        pipeline (list[dict], optional): Pipeline used for data processing.
+            Defaults to None.
+        data_root (str): Path of dataset root.
+        classes (tuple[str], optional): Classes used in the dataset.
+            Defaults to None.
+        load_interval (int, optional): Interval of loading the dataset. It is
+            used to uniformly sample the dataset. Defaults to 1.
+        with_velocity (bool, optional): Whether include velocity prediction
+            into the experiments. Defaults to True.
+        modality (dict, optional): Modality to specify the sensor data used
+            as input. Defaults to None.
+        box_type_3d (str, optional): Type of 3D box of this dataset.
+            Based on the `box_type_3d`, the dataset will encapsulate the box
+            to its original format then converted them to `box_type_3d`.
+            Defaults to 'LiDAR' in this dataset. Available options includes
+            - 'LiDAR': box in LiDAR coordinates
+            - 'Depth': box in depth coordinates, usually for indoor dataset
+            - 'Camera': box in camera coordinates
+        filter_empty_gt (bool, optional): Whether to filter empty GT.
+            Defaults to True.
+        test_mode (bool, optional): Whether the dataset is in test mode.
+            Defaults to False.
+        eval_version (str, optional): Configuration version of evaluation.
+            Defaults to 'detection_cvpr_2019'.
+    """
     NameMapping = {
         'movable_object.barrier': 'barrier',
         'vehicle.bicycle': 'bicycle',
...
@@ -172,7 +208,7 @@ class NuScenesDataset(Custom3DDataset):
             gt_velocity[nan_mask] = [0.0, 0.0]
             gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_velocity], axis=-1)

-        # the nuscenes box center is [0.5, 0.5, 0.5], we keep it
+        # the nuscenes box center is [0.5, 0.5, 0.5], we change it to be
         # the same as KITTI (0.5, 0.5, 0)
         gt_bboxes_3d = LiDARInstance3DBoxes(
             gt_bboxes_3d,
...
@@ -270,7 +306,7 @@ class NuScenesDataset(Custom3DDataset):
         # record metrics
         metrics = mmcv.load(osp.join(output_dir, 'metrics_summary.json'))
         detail = dict()
-        metric_prefix = '{}_NuScenes'.format(result_name)
+        metric_prefix = f'{result_name}_NuScenes'
         for name in self.CLASSES:
             for k, v in metrics['label_aps'][name].items():
                 val = float('{:.4f}'.format(v))
...
@@ -287,15 +323,15 @@ class NuScenesDataset(Custom3DDataset):
         """Format the results to json (standard format for COCO evaluation).

         Args:
-            results (list): Testing results of the dataset.
+            results (list[dict]): Testing results of the dataset.
             jsonfile_prefix (str | None): The prefix of json files. It includes
                 the file path and the prefix of filename, e.g., "a/b/prefix".
                 If not specified, a temp file will be created. Default: None.

         Returns:
-            tuple: (result_files, tmp_dir), result_files is a dict containing
-                the json filepaths, tmp_dir is the temporal directory created
-                for saving json files when jsonfile_prefix is not specified.
+            tuple (dict, str): result_files is a dict containing the json
+                filepaths, tmp_dir is the temporal directory created for
+                saving json files when jsonfile_prefix is not specified.
         """
         assert isinstance(results, list), 'results must be a list'
         assert len(results) == len(self), (
...
@@ -331,7 +367,7 @@ class NuScenesDataset(Custom3DDataset):
         """Evaluation in nuScenes protocol.

         Args:
-            results (list): Testing results of the dataset.
+            results (list[dict]): Testing results of the dataset.
             metric (str | list[str]): Metrics to be evaluated.
             logger (logging.Logger | str | None): Logger used for printing
                 related information during evaluation. Default: None.
...
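The velocity handling shown in the `@@ -172,7 +208,7 @@` hunk appends per-box (vx, vy) to the 7-DoF boxes, zeroing NaN velocities (boxes without a second observation) first. A minimal self-contained sketch of that step, with made-up arrays:

```python
import numpy as np

# Hypothetical (N, 7) boxes and (N, 2) velocities; the second box has no
# velocity estimate, encoded as NaN in the annotation infos.
gt_bboxes_3d = np.zeros((2, 7))
gt_velocity = np.array([[1.0, 0.5],
                        [np.nan, np.nan]])

# zero out NaN velocities before concatenating, as in the diff above
nan_mask = np.isnan(gt_velocity[:, 0])
gt_velocity[nan_mask] = [0.0, 0.0]
gt_bboxes_3d = np.concatenate([gt_bboxes_3d, gt_velocity], axis=-1)

print(gt_bboxes_3d.shape)  # boxes become 9-dimensional: (x, y, z, w, l, h, yaw, vx, vy)
```

The resulting (N, 9) array is what `LiDARInstance3DBoxes` receives with `box_dim=9` when velocity prediction is enabled.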
mmdet3d/datasets/scannet_dataset.py
View file @ 98cfb2ee
@@ -10,7 +10,36 @@ from .custom_3d import Custom3DDataset
 @DATASETS.register_module()
 class ScanNetDataset(Custom3DDataset):
+    """ScanNet Dataset.
+
+    This class serves as the API for experiments on the ScanNet Dataset.
+
+    Please refer to `<https://github.com/ScanNet/ScanNet>`_ for data
+    downloading. It is recommended to symlink the dataset root to
+    $MMDETECTION3D/data and organize them as the doc shows.
+
+    Args:
+        data_root (str): Path of dataset root.
+        ann_file (str): Path of annotation file.
+        pipeline (list[dict], optional): Pipeline used for data processing.
+            Defaults to None.
+        classes (tuple[str], optional): Classes used in the dataset.
+            Defaults to None.
+        modality (dict, optional): Modality to specify the sensor data used
+            as input. Defaults to None.
+        box_type_3d (str, optional): Type of 3D box of this dataset.
+            Based on the `box_type_3d`, the dataset will encapsulate the box
+            to its original format then converted them to `box_type_3d`.
+            Defaults to 'Depth' in this dataset. Available options includes
+            - 'LiDAR': box in LiDAR coordinates
+            - 'Depth': box in depth coordinates, usually for indoor dataset
+            - 'Camera': box in camera coordinates
+        filter_empty_gt (bool, optional): Whether to filter empty GT.
+            Defaults to True.
+        test_mode (bool, optional): Whether the dataset is in test mode.
+            Defaults to False.
+    """
     CLASSES = ('cabinet', 'bed', 'chair', 'sofa', 'table', 'door', 'window',
                'bookshelf', 'picture', 'counter', 'desk', 'curtain',
                'refrigerator', 'showercurtrain', 'toilet', 'sink', 'bathtub',
...
mmdet3d/datasets/sunrgbd_dataset.py
View file @
98cfb2ee
...
@@ -10,7 +10,36 @@ from .custom_3d import Custom3DDataset
@DATASETS.register_module()
class SUNRGBDDataset(Custom3DDataset):
    """SUNRGBD Dataset.

    This class serves as the API for experiments on the SUNRGBD Dataset.
    Please refer to `<http://rgbd.cs.princeton.edu/challenge.html>`_ for
    data downloading. It is recommended to symlink the dataset root to
    $MMDETECTION3D/data and organize them as the doc shows.

    Args:
        data_root (str): Path of dataset root.
        ann_file (str): Path of annotation file.
        pipeline (list[dict], optional): Pipeline used for data processing.
            Defaults to None.
        classes (tuple[str], optional): Classes used in the dataset.
            Defaults to None.
        modality (dict, optional): Modality to specify the sensor data used
            as input. Defaults to None.
        box_type_3d (str, optional): Type of 3D box of this dataset.
            Based on the `box_type_3d`, the dataset will encapsulate the box
            to its original format then convert it to `box_type_3d`.
            Defaults to 'Depth' in this dataset. Available options include:

            - 'LiDAR': box in LiDAR coordinates
            - 'Depth': box in depth coordinates, usually for indoor datasets
            - 'Camera': box in camera coordinates
        filter_empty_gt (bool, optional): Whether to filter empty GT.
            Defaults to True.
        test_mode (bool, optional): Whether the dataset is in test mode.
            Defaults to False.
    """
    CLASSES = ('bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser',
               'night_stand', 'bookshelf', 'bathtub')
...
requirements/runtime.txt
View file @
98cfb2ee
...
@@ -5,6 +5,7 @@ mmcv>=0.6.0
numba==0.48.0
numpy
nuscenes-devkit==1.0.5
lyft_dataset_sdk
# need older pillow until torchvision is fixed
Pillow<=6.2.2
plyfile
...
tools/create_data.py
View file @
98cfb2ee
...
@@ -3,18 +3,30 @@ import os.path as osp
import tools.data_converter.indoor_converter as indoor
import tools.data_converter.kitti_converter as kitti
import tools.data_converter.lyft_converter as lyft_converter
import tools.data_converter.nuscenes_converter as nuscenes_converter
from tools.data_converter.create_gt_database import create_groundtruth_database


def kitti_data_prep(root_path, info_prefix, version, out_dir):
    """Prepare data related to Kitti dataset.

    Related data consists of '.pkl' files recording basic infos,
    2D annotations and groundtruth database.

    Args:
        root_path (str): Path of dataset root.
        info_prefix (str): The prefix of info filenames.
        version (str): Dataset version.
        out_dir (str): Output directory of the groundtruth database info.
    """
    kitti.create_kitti_info_file(root_path, info_prefix)
    kitti.create_reduced_point_cloud(root_path, info_prefix)
    create_groundtruth_database(
        'KittiDataset',
        root_path,
        info_prefix,
        f'{out_dir}/{info_prefix}_infos_train.pkl',
        relative_path=False,
        mask_anno_path='instances_train.json',
        with_mask=(version == 'mask'))
...
@@ -26,30 +38,97 @@ def nuscenes_data_prep(root_path,
                       dataset_name,
                       out_dir,
                       max_sweeps=10):
    """Prepare data related to nuScenes dataset.

    Related data consists of '.pkl' files recording basic infos,
    2D annotations and groundtruth database.

    Args:
        root_path (str): Path of dataset root.
        info_prefix (str): The prefix of info filenames.
        version (str): Dataset version.
        dataset_name (str): The dataset class name.
        out_dir (str): Output directory of the groundtruth database info.
        max_sweeps (int): Number of input consecutive frames. Default: 10
    """
    nuscenes_converter.create_nuscenes_infos(
        root_path, info_prefix, version=version, max_sweeps=max_sweeps)

    if version == 'v1.0-test':
        return

    info_train_path = osp.join(root_path, f'{info_prefix}_infos_train.pkl')
    info_val_path = osp.join(root_path, f'{info_prefix}_infos_val.pkl')
    nuscenes_converter.export_2d_annotation(
        root_path, info_train_path, version=version)
    nuscenes_converter.export_2d_annotation(
        root_path, info_val_path, version=version)
    create_groundtruth_database(dataset_name, root_path, info_prefix,
                                f'{out_dir}/{info_prefix}_infos_train.pkl')
def lyft_data_prep(root_path,
                   info_prefix,
                   version,
                   dataset_name,
                   out_dir,
                   max_sweeps=10):
    """Prepare data related to Lyft dataset.

    Related data consists of '.pkl' files recording basic infos
    and 2D annotations.
    Although the ground truth database is not used in Lyft, it can also be
    generated like nuScenes.

    Args:
        root_path (str): Path of dataset root.
        info_prefix (str): The prefix of info filenames.
        version (str): Dataset version.
        dataset_name (str): The dataset class name.
        out_dir (str): Output directory of the groundtruth database info.
            Not used here if the groundtruth database is not generated.
        max_sweeps (int): Number of input consecutive frames. Default: 10
    """
    lyft_converter.create_lyft_infos(
        root_path, info_prefix, version=version, max_sweeps=max_sweeps)

    if version == 'v1.01-test':
        return

    train_info_name = f'{info_prefix}_infos_train'
    val_info_name = f'{info_prefix}_infos_val'
    info_train_path = osp.join(root_path, f'{train_info_name}.pkl')
    info_val_path = osp.join(root_path, f'{val_info_name}.pkl')
    lyft_converter.export_2d_annotation(
        root_path, info_train_path, version=version)
    lyft_converter.export_2d_annotation(
        root_path, info_val_path, version=version)
def scannet_data_prep(root_path, info_prefix, out_dir, workers):
    """Prepare the info file for scannet dataset.

    Args:
        root_path (str): Path of dataset root.
        info_prefix (str): The prefix of info filenames.
        out_dir (str): Output directory of the generated info file.
        workers (int): Number of threads to be used.
    """
    indoor.create_indoor_info_file(
        root_path, info_prefix, out_dir, workers=workers)


def sunrgbd_data_prep(root_path, info_prefix, out_dir, workers):
    """Prepare the info file for sunrgbd dataset.

    Args:
        root_path (str): Path of dataset root.
        info_prefix (str): The prefix of info filenames.
        out_dir (str): Output directory of the generated info file.
        workers (int): Number of threads to be used.
    """
    indoor.create_indoor_info_file(
        root_path, info_prefix, out_dir, workers=workers)
...
@@ -117,6 +196,23 @@ if __name__ == '__main__':
            dataset_name='NuScenesDataset',
            out_dir=args.out_dir,
            max_sweeps=args.max_sweeps)
    elif args.dataset == 'lyft':
        train_version = f'{args.version}-train'
        lyft_data_prep(
            root_path=args.root_path,
            info_prefix=args.extra_tag,
            version=train_version,
            dataset_name='LyftDataset',
            out_dir=args.out_dir,
            max_sweeps=args.max_sweeps)
        test_version = f'{args.version}-test'
        lyft_data_prep(
            root_path=args.root_path,
            info_prefix=args.extra_tag,
            version=test_version,
            dataset_name='LyftDataset',
            out_dir=args.out_dir,
            max_sweeps=args.max_sweeps)
    elif args.dataset == 'scannet':
        scannet_data_prep(
            root_path=args.root_path,
...
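The entry point prepares both splits from a single `--version` flag by appending `-train` and `-test` suffixes. A tiny sketch of that naming convention with a hypothetical argument value (the real script reads it from argparse):

```python
# Hypothetical CLI value standing in for args.version.
version = 'v1.01'

train_version = f'{version}-train'
test_version = f'{version}-test'
print(train_version, test_version)  # v1.01-train v1.01-test
```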
tools/data_converter/kitti_converter.py
View file @
98cfb2ee
...
@@ -13,6 +13,9 @@ def convert_to_kitti_info_version2(info):
    Args:
        info (dict): Info of the input kitti data.
            - image (dict): image info
            - calib (dict): calibration info
            - point_cloud (dict): point cloud info
    """
    if 'image' not in info or 'calib' not in info or 'point_cloud' not in info:
        info['image'] = {
...
@@ -194,6 +197,20 @@ def create_reduced_point_cloud(data_path,
                               test_info_path=None,
                               save_path=None,
                               with_back=False):
    """Create reduced point cloud info file.

    Args:
        data_path (str): Path of original infos.
        pkl_prefix (str): Prefix of info files.
        train_info_path (str | None): Path of training set info.
            Default: None.
        val_info_path (str | None): Path of validation set info.
            Default: None.
        test_info_path (str | None): Path of test set info.
            Default: None.
        save_path (str | None): Path to save reduced info.
        with_back (bool): Whether to create backup info. Default: False.
    """
    if train_info_path is None:
        train_info_path = Path(data_path) / f'{pkl_prefix}_infos_train.pkl'
    if val_info_path is None:
...
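The `None` defaults above are filled in with `pathlib.Path` arithmetic. A self-contained sketch of that fallback pattern, using a hypothetical helper and prefix (not the repository's actual function):

```python
from pathlib import Path


def default_info_path(data_path, pkl_prefix, info_path=None):
    """Fall back to '<data_path>/<pkl_prefix>_infos_train.pkl' when unset."""
    if info_path is None:
        info_path = Path(data_path) / f'{pkl_prefix}_infos_train.pkl'
    return Path(info_path)


p = default_info_path('data/kitti', 'kitti')
print(p.name)  # kitti_infos_train.pkl
```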
tools/data_converter/lyft_converter.py
0 → 100644
View file @
98cfb2ee
import os.path as osp

import mmcv
import numpy as np
from lyft_dataset_sdk.lyftdataset import LyftDataset as Lyft
from pyquaternion import Quaternion

from mmdet3d.datasets import LyftDataset
from .nuscenes_converter import (get_2d_boxes, get_available_scenes,
                                 obtain_sensor2top)

lyft_categories = ('car', 'truck', 'bus', 'emergency_vehicle', 'other_vehicle',
                   'motorcycle', 'bicycle', 'pedestrian', 'animal')


def create_lyft_infos(root_path,
                      info_prefix,
                      version='v1.01-train',
                      max_sweeps=10):
    """Create info file of lyft dataset.

    Given the raw data, generate its related info file in pkl format.

    Args:
        root_path (str): Path of the data root.
        info_prefix (str): Prefix of the info file to be generated.
        version (str): Version of the data.
            Default: 'v1.01-train'
        max_sweeps (int): Max number of sweeps.
            Default: 10
    """
    lyft = Lyft(
        data_path=osp.join(root_path, version),
        json_path=osp.join(root_path, version, version),
        verbose=True)
    available_vers = ['v1.01-train', 'v1.01-test']
    assert version in available_vers
    if version == 'v1.01-train':
        train_scenes = mmcv.list_from_file('data/lyft/train.txt')
        val_scenes = mmcv.list_from_file('data/lyft/val.txt')
    elif version == 'v1.01-test':
        train_scenes = mmcv.list_from_file('data/lyft/test.txt')
        val_scenes = []
    else:
        raise ValueError('unknown')

    # filter existing scenes.
    available_scenes = get_available_scenes(lyft)
    available_scene_names = [s['name'] for s in available_scenes]
    train_scenes = list(
        filter(lambda x: x in available_scene_names, train_scenes))
    val_scenes = list(filter(lambda x: x in available_scene_names, val_scenes))
    train_scenes = set([
        available_scenes[available_scene_names.index(s)]['token']
        for s in train_scenes
    ])
    val_scenes = set([
        available_scenes[available_scene_names.index(s)]['token']
        for s in val_scenes
    ])

    test = 'test' in version
    if test:
        print(f'test scene: {len(train_scenes)}')
    else:
        print(f'train scene: {len(train_scenes)}, '
              f'val scene: {len(val_scenes)}')
    train_lyft_infos, val_lyft_infos = _fill_trainval_infos(
        lyft, train_scenes, val_scenes, test, max_sweeps=max_sweeps)

    metadata = dict(version=version)
    if test:
        print(f'test sample: {len(train_lyft_infos)}')
        data = dict(infos=train_lyft_infos, metadata=metadata)
        info_name = f'{info_prefix}_infos_test'
        info_path = osp.join(root_path, f'{info_name}.pkl')
        mmcv.dump(data, info_path)
    else:
        print(f'train sample: {len(train_lyft_infos)}, '
              f'val sample: {len(val_lyft_infos)}')
        data = dict(infos=train_lyft_infos, metadata=metadata)
        train_info_name = f'{info_prefix}_infos_train'
        info_path = osp.join(root_path, f'{train_info_name}.pkl')
        mmcv.dump(data, info_path)
        data['infos'] = val_lyft_infos
        val_info_name = f'{info_prefix}_infos_val'
        info_val_path = osp.join(root_path, f'{val_info_name}.pkl')
        mmcv.dump(data, info_val_path)
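`create_lyft_infos` first filters the split lists against the scenes actually present on disk, then maps the surviving scene names to their tokens. A stripped-down sketch of that filter-and-map step with synthetic scene records (not real Lyft tokens):

```python
# Synthetic stand-ins for the records returned by get_available_scenes.
available_scenes = [
    {'name': 'scene-a', 'token': 'tok-a'},
    {'name': 'scene-b', 'token': 'tok-b'},
]
train_scenes = ['scene-a', 'scene-missing']

available_scene_names = [s['name'] for s in available_scenes]
# Drop split entries whose data is absent, then swap names for tokens.
train_scenes = list(filter(lambda x: x in available_scene_names, train_scenes))
train_tokens = set(
    available_scenes[available_scene_names.index(s)]['token']
    for s in train_scenes)
print(train_tokens)  # {'tok-a'}
```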
def _fill_trainval_infos(lyft,
                         train_scenes,
                         val_scenes,
                         test=False,
                         max_sweeps=10):
    """Generate the train/val infos from the raw data.

    Args:
        lyft (:obj:`LyftDataset`): Dataset class in the Lyft dataset.
        train_scenes (list[str]): Basic information of training scenes.
        val_scenes (list[str]): Basic information of validation scenes.
        test (bool): Whether to use the test mode. In the test mode, no
            annotations can be accessed. Default: False.
        max_sweeps (int): Max number of sweeps. Default: 10.

    Returns:
        tuple[list[dict]]: Information of training set and
            validation set that will be saved to the info file.
    """
    train_lyft_infos = []
    val_lyft_infos = []

    for sample in mmcv.track_iter_progress(lyft.sample):
        lidar_token = sample['data']['LIDAR_TOP']
        sd_rec = lyft.get('sample_data', sample['data']['LIDAR_TOP'])
        cs_record = lyft.get('calibrated_sensor',
                             sd_rec['calibrated_sensor_token'])
        pose_record = lyft.get('ego_pose', sd_rec['ego_pose_token'])
        lidar_path, boxes, _ = lyft.get_sample_data(lidar_token)

        lidar_path = str(lidar_path)
        mmcv.check_file_exist(lidar_path)

        info = {
            'lidar_path': lidar_path,
            'token': sample['token'],
            'sweeps': [],
            'cams': dict(),
            'lidar2ego_translation': cs_record['translation'],
            'lidar2ego_rotation': cs_record['rotation'],
            'ego2global_translation': pose_record['translation'],
            'ego2global_rotation': pose_record['rotation'],
            'timestamp': sample['timestamp'],
        }

        l2e_r = info['lidar2ego_rotation']
        l2e_t = info['lidar2ego_translation']
        e2g_r = info['ego2global_rotation']
        e2g_t = info['ego2global_translation']
        l2e_r_mat = Quaternion(l2e_r).rotation_matrix
        e2g_r_mat = Quaternion(e2g_r).rotation_matrix

        # obtain 6 images' information per frame
        camera_types = [
            'CAM_FRONT',
            'CAM_FRONT_RIGHT',
            'CAM_FRONT_LEFT',
            'CAM_BACK',
            'CAM_BACK_LEFT',
            'CAM_BACK_RIGHT',
        ]
        for cam in camera_types:
            cam_token = sample['data'][cam]
            cam_path, _, cam_intrinsic = lyft.get_sample_data(cam_token)
            cam_info = obtain_sensor2top(lyft, cam_token, l2e_t, l2e_r_mat,
                                         e2g_t, e2g_r_mat, cam)
            cam_info.update(cam_intrinsic=cam_intrinsic)
            info['cams'].update({cam: cam_info})

        # obtain sweeps for a single key-frame
        sd_rec = lyft.get('sample_data', sample['data']['LIDAR_TOP'])
        sweeps = []
        while len(sweeps) < max_sweeps:
            if not sd_rec['prev'] == '':
                sweep = obtain_sensor2top(lyft, sd_rec['prev'], l2e_t,
                                          l2e_r_mat, e2g_t, e2g_r_mat,
                                          'lidar')
                sweeps.append(sweep)
                sd_rec = lyft.get('sample_data', sd_rec['prev'])
            else:
                break
        info['sweeps'] = sweeps

        # obtain annotation
        if not test:
            annotations = [
                lyft.get('sample_annotation', token)
                for token in sample['anns']
            ]
            locs = np.array([b.center for b in boxes]).reshape(-1, 3)
            dims = np.array([b.wlh for b in boxes]).reshape(-1, 3)
            rots = np.array([b.orientation.yaw_pitch_roll[0]
                             for b in boxes]).reshape(-1, 1)

            names = [b.name for b in boxes]
            for i in range(len(names)):
                if names[i] in LyftDataset.NameMapping:
                    names[i] = LyftDataset.NameMapping[names[i]]
            names = np.array(names)

            # we need to convert rot to SECOND format.
            gt_boxes = np.concatenate([locs, dims, -rots - np.pi / 2], axis=1)
            assert len(gt_boxes) == len(
                annotations), f'{len(gt_boxes)}, {len(annotations)}'
            info['gt_boxes'] = gt_boxes
            info['gt_names'] = names
            info['num_lidar_pts'] = np.array(
                [a['num_lidar_pts'] for a in annotations])
            info['num_radar_pts'] = np.array(
                [a['num_radar_pts'] for a in annotations])

        if sample['scene_token'] in train_scenes:
            train_lyft_infos.append(info)
        else:
            val_lyft_infos.append(info)

    return train_lyft_infos, val_lyft_infos
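The `gt_boxes` assembly above flips the Lyft yaw into the SECOND box convention via `-yaw - pi/2` before concatenating centers, sizes, and rotations into `(N, 7)` boxes. A minimal numpy sketch of that step using synthetic values (not real Lyft annotations):

```python
import numpy as np

# Hypothetical centers (x, y, z), sizes (w, l, h) and yaws for two boxes.
locs = np.array([[10.0, 2.0, -1.0], [5.0, -3.0, -0.8]])
dims = np.array([[1.9, 4.5, 1.6], [0.7, 1.8, 1.7]])
rots = np.array([[0.0], [np.pi / 2]])

# Same concatenation as _fill_trainval_infos: SECOND yaw = -yaw - pi/2.
gt_boxes = np.concatenate([locs, dims, -rots - np.pi / 2], axis=1)
print(gt_boxes.shape)  # (2, 7)
```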
def export_2d_annotation(root_path, info_path, version):
    """Export 2d annotation from the info file and raw data.

    Args:
        root_path (str): Root path of the raw data.
        info_path (str): Path of the info file.
        version (str): Dataset version.
    """
    # get bbox annotations for camera
    camera_types = [
        'CAM_FRONT',
        'CAM_FRONT_RIGHT',
        'CAM_FRONT_LEFT',
        'CAM_BACK',
        'CAM_BACK_LEFT',
        'CAM_BACK_RIGHT',
    ]
    lyft_infos = mmcv.load(info_path)['infos']
    lyft = Lyft(
        data_path=osp.join(root_path, version),
        json_path=osp.join(root_path, version, version),
        verbose=True)
    cat2Ids = [
        dict(id=lyft_categories.index(cat_name), name=cat_name)
        for cat_name in lyft_categories
    ]
    coco_ann_id = 0
    coco_2d_dict = dict(annotations=[], images=[], categories=cat2Ids)
    for info in mmcv.track_iter_progress(lyft_infos):
        for cam in camera_types:
            cam_info = info['cams'][cam]
            coco_infos = get_2d_boxes(
                lyft,
                cam_info['sample_data_token'],
                visibilities=['', '1', '2', '3', '4'])
            (height, width, _) = mmcv.imread(cam_info['data_path']).shape
            coco_2d_dict['images'].append(
                dict(
                    file_name=cam_info['data_path'],
                    id=cam_info['sample_data_token'],
                    width=width,
                    height=height))
            for coco_info in coco_infos:
                if coco_info is None:
                    continue
                # add an empty key for coco format
                coco_info['segmentation'] = []
                coco_info['id'] = coco_ann_id
                coco_2d_dict['annotations'].append(coco_info)
                coco_ann_id += 1
    mmcv.dump(coco_2d_dict, f'{info_path[:-4]}.coco.json')
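`export_2d_annotation` accumulates images and annotations into a COCO-style dict while assigning a running annotation id. A minimal sketch of that accumulation with fabricated entries (the real code fills the records from `get_2d_boxes`):

```python
categories = ('car', 'pedestrian')  # abbreviated stand-in category list
cat2Ids = [dict(id=categories.index(c), name=c) for c in categories]

coco_2d_dict = dict(annotations=[], images=[], categories=cat2Ids)
coco_ann_id = 0
for fake_box in [{'bbox': [0, 0, 10, 10]}, {'bbox': [5, 5, 20, 20]}]:
    fake_box['segmentation'] = []  # empty key expected by the coco format
    fake_box['id'] = coco_ann_id
    coco_2d_dict['annotations'].append(fake_box)
    coco_ann_id += 1

print(len(coco_2d_dict['annotations']))  # 2
```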
tools/data_converter/nuscenes_converter.py
View file @
98cfb2ee
...
@@ -50,7 +50,7 @@ def create_nuscenes_infos(root_path,
        raise ValueError('unknown')

    # filter existing scenes.
    available_scenes = get_available_scenes(nusc)
    available_scene_names = [s['name'] for s in available_scenes]
    train_scenes = list(
        filter(lambda x: x in available_scene_names, train_scenes))
...
@@ -93,7 +93,19 @@ def create_nuscenes_infos(root_path,
    mmcv.dump(data, info_val_path)


def get_available_scenes(nusc):
    """Get available scenes from the input nuscenes class.

    Given the raw data, get the information of available scenes for
    further info generation.

    Args:
        nusc (class): Dataset class in the nuScenes dataset.

    Returns:
        available_scenes (list[dict]): List of basic information for the
            available scenes.
    """
    available_scenes = []
    print('total scene num: {}'.format(len(nusc.scene)))
    for scene in nusc.scene:
...
@@ -105,6 +117,7 @@ def _get_available_scenes(nusc):
        scene_not_exist = False
        while has_more_frames:
            lidar_path, boxes, _ = nusc.get_sample_data(sd_rec['token'])
            lidar_path = str(lidar_path)
            if not mmcv.is_filepath(lidar_path):
                scene_not_exist = True
                break
...
@@ -126,6 +139,20 @@ def _fill_trainval_infos(nusc,
                         val_scenes,
                         test=False,
                         max_sweeps=10):
    """Generate the train/val infos from the raw data.

    Args:
        nusc (:obj:`NuScenes`): Dataset class in the nuScenes dataset.
        train_scenes (list[str]): Basic information of training scenes.
        val_scenes (list[str]): Basic information of validation scenes.
        test (bool): Whether to use the test mode. In the test mode, no
            annotations can be accessed. Default: False.
        max_sweeps (int): Max number of sweeps. Default: 10.

    Returns:
        tuple[list[dict]]: Information of training set and validation set
            that will be saved to the info file.
    """
    train_nusc_infos = []
    val_nusc_infos = []
...
@@ -137,7 +164,7 @@ def _fill_trainval_infos(nusc,
        pose_record = nusc.get('ego_pose', sd_rec['ego_pose_token'])
        lidar_path, boxes, _ = nusc.get_sample_data(lidar_token)
        mmcv.check_file_exist(lidar_path)

        info = {
            'lidar_path': lidar_path,
...
@@ -238,13 +265,28 @@ def obtain_sensor2top(nusc,
                      e2g_t,
                      e2g_r_mat,
                      sensor_type='lidar'):
    """Obtain the info with RT matrix from general sensor to Top LiDAR.

    Args:
        nusc (class): Dataset class in the nuScenes dataset.
        sensor_token (str): Sample data token corresponding to the
            specific sensor type.
        l2e_t (np.ndarray): Translation from lidar to ego in shape (1, 3).
        l2e_r_mat (np.ndarray): Rotation matrix from lidar to ego
            in shape (3, 3).
        e2g_t (np.ndarray): Translation from ego to global in shape (1, 3).
        e2g_r_mat (np.ndarray): Rotation matrix from ego to global
            in shape (3, 3).
        sensor_type (str): Sensor to calibrate. Default: 'lidar'.

    Returns:
        sweep (dict): Sweep information after transformation.
    """
    sd_rec = nusc.get('sample_data', sensor_token)
    cs_record = nusc.get('calibrated_sensor',
                         sd_rec['calibrated_sensor_token'])
    pose_record = nusc.get('ego_pose', sd_rec['ego_pose_token'])
    data_path = str(nusc.get_sample_data_path(sd_rec['token']))
    sweep = {
        'data_path': data_path,
        'type': sensor_type,
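`obtain_sensor2top` composes sensor-to-ego and ego-to-global transforms on the sweep side with their inverses on the key-frame side. A toy numpy round trip showing why chaining a rigid transform with its inverse recovers the original point (synthetic calibrations, not nuScenes values):

```python
import numpy as np


def rot_z(yaw):
    """Rotation matrix about the z-axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])


# Hypothetical calibrations: sensor->ego and ego->global.
s2e_r, s2e_t = rot_z(0.3), np.array([1.0, 0.0, 1.6])
e2g_r, e2g_t = rot_z(1.2), np.array([100.0, 50.0, 0.0])

# A point in the sensor frame, mapped to global and back.
p_sensor = np.array([5.0, 2.0, 0.0])
p_global = e2g_r @ (s2e_r @ p_sensor + s2e_t) + e2g_t
p_back = s2e_r.T @ (e2g_r.T @ (p_global - e2g_t) - s2e_t)
assert np.allclose(p_back, p_sensor)  # round trip recovers the point
```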
...
@@ -276,6 +318,13 @@ def obtain_sensor2top(nusc,
def export_2d_annotation(root_path, info_path, version):
    """Export 2d annotation from the info file and raw data.

    Args:
        root_path (str): Root path of the raw data.
        info_path (str): Path of the info file.
        version (str): Dataset version.
    """
    # get bbox annotations for camera
    camera_types = [
        'CAM_FRONT',
...
@@ -295,9 +344,6 @@ def export_2d_annotation(root_path, info_path, version):
    coco_ann_id = 0
    coco_2d_dict = dict(annotations=[], images=[], categories=cat2Ids)
    for info in mmcv.track_iter_progress(nusc_infos):
        for cam in camera_types:
            cam_info = info['cams'][cam]
            coco_infos = get_2d_boxes(
...
@@ -319,27 +365,7 @@ def export_2d_annotation(root_path, info_path, version):
                coco_info['id'] = coco_ann_id
                coco_2d_dict['annotations'].append(coco_info)
                coco_ann_id += 1
    mmcv.dump(coco_2d_dict, f'{info_path[:-4]}.coco.json')


def get_2d_boxes(nusc, sample_data_token: str,
...
@@ -351,7 +377,7 @@ def get_2d_boxes(nusc, sample_data_token: str,
        visibilities: Visibility filter.

    Return:
        list[dict]: List of 2D annotation records that belong to the input
            `sample_data_token`.
    """
...
@@ -436,12 +462,14 @@ def post_process_coords(
    Get the intersection of the convex hull of the reprojected bbox corners
    and the image canvas; return None if there is no intersection.

    Args:
        corner_coords (list[int]): Corner coordinates of reprojected
            bounding box.
        imsize (tuple[int]): Size of the image canvas.

    Return:
        tuple[float]: Intersection of the convex hull of the 2D box
            corners and the image canvas.
    """
    polygon_from_2d_box = MultiPoint(corner_coords).convex_hull
    img_canvas = box(0, 0, imsize[0], imsize[1])
...
@@ -466,15 +494,26 @@ def generate_record(ann_rec: dict, x1: float, y1: float, x2: float, y2: float,
    """Generate one 2D annotation record given various information on
    top of the 2D bounding box coordinates.

    Args:
        ann_rec (dict): Original 3d annotation record.
        x1 (float): Minimum value of the x coordinate.
        y1 (float): Minimum value of the y coordinate.
        x2 (float): Maximum value of the x coordinate.
        y2 (float): Maximum value of the y coordinate.
        sample_data_token (str): Sample data token.
        filename (str): The corresponding image file where the annotation
            is present.

    Returns:
        dict: A sample 2D annotation record.
            - file_name (str): file name
            - image_id (str): sample data token
            - area (float): 2d box area
            - category_name (str): category name
            - category_id (int): category id
            - bbox (list[float]): left x, top y, dx, dy of 2d box
            - iscrowd (int): whether the area is crowd
    """
    repro_rec = OrderedDict()
    repro_rec['sample_data_token'] = sample_data_token
...
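The record layout documented for `generate_record` (bbox as left x, top y, dx, dy, plus area and iscrowd) can be sketched with a hypothetical helper, using made-up corner values; the field names mirror the docstring above, not the function's full logic:

```python
from collections import OrderedDict


def make_coco_record(category_id, x1, y1, x2, y2, image_id):
    """Hypothetical helper mirroring the fields generate_record emits."""
    rec = OrderedDict()
    rec['image_id'] = image_id
    rec['category_id'] = category_id
    rec['bbox'] = [x1, y1, x2 - x1, y2 - y1]  # left x, top y, dx, dy
    rec['area'] = (x2 - x1) * (y2 - y1)
    rec['iscrowd'] = 0
    return rec


rec = make_coco_record(0, 100.0, 50.0, 220.0, 130.0, 'token-abc')
print(rec['bbox'], rec['area'])  # [100.0, 50.0, 120.0, 80.0] 9600.0
```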