[["\nAsk HN: What have you completed in 2015 - lakeeffect\nWhat have you completed in 2015 or are looking to release in 2016. I use completed loosely. Would love to hear some progress to motivate me into the new year.\n======\nionicabizau\nA lot of open-source stuff. The best ones were:\n\n\\- [https://github.com/IonicaBizau/git-\nstats](https://github.com/IonicaBizau/git-stats) (January, 2015) \\-\n[https://github.com/IonicaBizau/node-\ncobol](https://github.com/IonicaBizau/node-cobol) (October, 2015) \\-\n[https://github.com/IonicaBizau/gridly](https://github.com/IonicaBizau/gridly)\n(December, 2015)\n\nWorking part-time (~100 hours / month) for the company I work for, I still\nhave enough \"free\" time when I do: open-source stuff, JavaScript training,\nplaying piano, playing with high-voltage (!), playing with chemistry\nexplosions and experiments (in fact making fire almost anywhere, anytime).\n\nOne of this year goals was to drop out of college. I did it two months ago.\nSince officially I'm still a student (my documents are at the university), I'm\nstill getting loans because of my good results from my previous year. But I\ndon't regret this decission at all (at least, until now!).\n\nAll the thanks go to God! I enjoy being a Jesus follower. I believe this world\nis not our home. God prepares for us a better world. Until then, I'm happy to\nlove Him. Actually, being a believer and web developer is a nice combination.\n\nWish you a happy 2016! :-)\n\n"],["\nGoogle relents slightly on blocking ad-blockers \u2013 for paid-up enterprise Chrome - nachtigall\nhttps://www.theregister.co.uk/2019/05/29/google_webrequest_api/\n======\nlern_too_spel\nNow Chrome will have the crippled adblocking capabilities of Safari. This is\nthe browser equivalent of removing the headphone jack \u2014 removing a feature\nmany people use to get some security benefit for a few. The problem for Google\nis that unlike Apple's customers, you can't pee on a Chrome or Android user's\nback and tell them it's raining. No Android user was happy the headphone jack\nwent away just because there are wireless options.\n\nSince Google controls the extension distribution system, it could just as\neasily plastered extensions that use this API with scary warnings, so only\nusers who knew what they were getting into would install them. It's not like\nusers install so many extensions that use this API that they would start to\nignore the warnings.\n\n------\nohpls\nAt least whatever Google decides to do I'll still have my Pi-hole blocking ads\nand trackers\n\n~~~\ndanShumway\nDomain-based filtering isn't enough to block all ads and trackers -- unless\nPi-hole is doing more than just acting as a DNS server nowadays; I haven't\nchecked in a while.\n\nIn particular, using Pi-hole forces you to decide globally what domains you'll\nblock -- so you can't (for example) block Twitter/Facebook on 3rd-party\ndomains but allow it when you directly visit them. DNS blocking also can't\nhandle individual URLs within a domain -- so you won't be able to block ads on\nsites like Youtube or Facebook.\n\nAside from lacking granularity for when domains are allowed or disallowed, Pi-\nhole also won't protect you from the majority of first-party tracking. 
That's\nless of a concern though because (at least for now) the V3 manifest isn't\nstopping extensions from blocking tracking cookies or disabling features like\nCanvas, so you can still rely on them for that.\n\nTypically though, I advise people to prefer extensions like UMatrix and Ublock\nOrigin, and to fall back on Pi-hole as a backup strategy when nothing else is\navailable. It's useful (particularly to help with native apps and IOT\ndevices), but I don't think it's a substitute for a good browser-based ad\nblocker.\n\n------\niamthatiam\nDoes this impact chromium? Will Brave Browser continue to function?\n\n~~~\nrasz\nYes it does, and will put a burden on forks to maintain their own patch tree.\nVivaldi already semi declared unwillingness to do it.\n\n------\nanfilt\nThat does not really seem like a \"relent\"...\n\n------\nm-p-3\nIf that's not a good reason enough to switch back to Firefox, then I don't\nknow what will.\n\n------\nwinkeltripel\nStill looks like scummy behaviour to me.\n\n"],["\n\nPosterous (YC S08) launches group blogs that are also email lists - rantfoil\nhttp://mashable.com/2009/05/05/posterous-email-lists/\n\n======\njonas_b\nI'm not sure, but this, or some evolution of this feature, could turn out to\nbe a real revolution when it comes to group collaboration and sharing.\n\nOr it might just be another feature, or that I'm trying to see something that\nisn't there.\n\nI've been searching for a merge between this, Chatterous and possibly etherpad\nfor small-group collab. Alas, it eludes me still.\n\n~~~\nrantfoil\nWe're definitely excited about the potential for this feature to grow into its\nown product!\n\nYou can expect improvements to this coming fast and furious.\n\n------\nzaidf\nThis is ridiculously awesome and the exact feature we needed few months ago.\n\nWe started a blog for our larger extended family(50+ people) so there is a\nsimple place for all our family communication. Yet a lot of the people in our\nfamily know only rudimentary use of the computer--and that means email. So the\nblog idea didn't quite work out and we're back to emailing--which is\ndisorganized but works.\n\nWith this we can get the best of both worlds! Communicate via email, archive\non a blog!\n\n~~~\nrantfoil\nWould love for you to use Posterous for this, and would love to chat with you\noffline about how it works out for your team and how we can get better. My\ncontact info is in my profile. =)\n\n------\nhboon\nI'm more interested in how you got Mashable to cover this (so quickly).\nAnything you could share there? (yes, PR-related question).\n\nAwesome feature, I'd imagine you will be using this mechanism to allow\ndevelopers against your API to track API changes? :)\n\n~~~\nrantfoil\nFor a brand that's gaining momentum, getting coverage is a bit easier. You\nmight already follow the writers on twitter or vice versa, or if you're in\nSF/the valley you might even grab coffee with them.\n\nFor a company starting out, you need to either have a product so thoroughly\nbadass that it trounces something else that is hot, or you need a connection /\nintro to someone who knows them well.\n\nIt doesn't have to be Michael Arrington himself -- in fact, he's obviously\nsuch a busy and important guy that it's almost impossible to get his\nattention. However, at each of these blogs there are staff writers. They're\nthe ideal person to reach out to -- they're looking for great stories, and\nhey, you've got one.\n\nFinal tip: use bullet points. Include screenshots if you can. 
If you make it\nso compelling and so obvious that it's a story, and you practically write it\nfor them, you make their life easier and that makes it a no-brainer for them\nto write about you.\n\nAs for API -- that sounds very cool. We're very psyched about becoming a more\nopen platform to let people build apps on top of Posterous.\n\n------\njcbozonier\nI'm in love. May I have permission to marry your daughter?\n\n[I just realized we don't do humor here. Mod me down :(]\n\n------\ntybris\nWoah! I can see how this is different from an e-mail list that gets published\non the web. ;-)\n\nInteresting to see how a new layer of paint can bring old concepts to the\nmasses.\n\n~~~\nmadh\nAbsolutely. Great ideas never die.\n\n------\njoepestro\nPosterous is great. I'm interested to know - was this something that you've\nhad planned for a while, or did it come as a natural evolution of the product\n/ feedback from users?\n\n~~~\nrantfoil\nIt's something we talked about even back during last summer when we first\nlaunched Posterous. It's great to finally be able to put it out there.\n\nOnce we launched group blogs though -- we did start hearing a lot of requests\nfor this feature too. There's absolutely a validation piece to it. When users\nask for it, you know there's something there.\n\n------\nqeorge\nI've got a Google Group with old college friends which we've always wanted to\nbe richer, but we don't want to lose the casual members by trying to migrate\nand change their habits. I think hooking it up to a group posterous account\nmight be the right way to please everyone.\n\nVery cool.\n\n------\nthorax\nI wonder-- are they stripping out the reply/quotations somehow, or are those\ngoing to be showing up on the blog, too? Long, long streams of\n>>>>>>>>>>>>>||||>>>> make for crummy blogging.\n\n~~~\njoepestro\nGood point. I did an experiment a few weeks ago to see how this was handled on\nposterous by sending an email to a friend and cc'ing post@posterous.com.\n\nIt was handled well on their end with a solid line on the left side of the\nreply. So it looks like something they are already prepared for.\n\n------\nanigbrowl\nI like it, but it fails to load 3 out of 5 times when I click on a Posterous\nlink. First day traffic blues?\n\n"],["\nThe Number of New Bitcoin Accounts Is Skyrocketing - petethomas\nhttps://www.bloomberg.com/news/articles/2017-11-27/new-crypto-accounts-proliferate-as-bitcoin-flirts-with-10-000\n======\nblackflame7000\nAt what point does it become speculation? Perhaps someone with more knowledge\ncan shed some light on what exactly people are buying when they buy a bitcoin.\n\n"],["\nSymbolic expressions can be automatically differentiated too - objections\nhttp://h2.jaguarpaw.co.uk/posts/symbolic-expressions-can-be-automatically-differentiated/\n======\njohnbender\nOne can also calculate the derivative of a context free grammar with respect\nto a given terminal.\n\n[http://matt.might.net/articles/parsing-with-\nderivatives/](http://matt.might.net/articles/parsing-with-derivatives/)\n\n~~~\nApanatshka\nThat's also a really cool article. Thanks for sharing it!\n\n------\ndelluminatus\nGreat post, as an AD tutorial and as a (an?) Haskell exercise. Having known\nnothing about AD before, I feel like I have a good understanding of what it is\n-- as he says, it's so simple -- but I don't understand _why_ the algorithm is\nso much faster. 
Just looking at the differentiator function and the AD function, it actually appears that the AD should take longer because it does more computation per step (both the function and the derivative). But it seems like every article or paper is talking about how to implement AD, not why the algorithm is so efficient. Does anyone happen to know of a good article or paper about that? Ideally, one just as nice and comprehensible as this!

~~~
vidarh
The first alternative builds a large tree structure, and then evaluates the whole tree structure afterwards.

So first it blows up the size of the expression to process, and _then_ it calculates the result. A lot of those calculations will be redundant.

The second one not only avoids evaluating the tree separately, but "prunes" a lot of the evaluation automatically by effectively short-circuiting the whole process. Consider (with the caveat that my Haskell understanding is rudimentary at best and it was about 20 years since I last did anything involving symbolic differentiation):

    Product e e' -> Product e (diff e') `Sum` Product (diff e) e'

Followed by a separate run with:

    Product e e' -> ev e * ev e'

vs

    Product e e' -> let (ex, ed)   = ev e
                        (ex', ed') = ev e'
                    in (ex * ex', ex * ed' + ed * ex')

(I pick the "Product" rule as an example because it is one of the ones that blows up the size of the tree.)

Let's say you do something simple like Product X X. You get Sum (Product X One) (Product One X) out, and then you have to evaluate each node.

In the second case, you match Product e e'. You process X and assign (x,1) to (ex,ed), process the second X and assign (x,1) to (ex', ed'), and then return (ex * ex', ex * ed' + ed * ex').

In the first case, you've first differentiated 3 nodes (Product + 2x "X"), then evaluated the 7 nodes that were produced as output, for a total of ten nodes processed.

In the second, you've evaluated/differentiated 3 nodes in one go, without the intermediate step of having to evaluate a much larger tree.

In a large example, the number of nodes in the differentiated output quickly explodes, and subsequent evaluation would increase rapidly in cost.

~~~
amelius
From an asymptotic complexity viewpoint, I don't see any difference between the two algorithms (AD versus building an expression tree and doing it symbolically, then evaluating). Both are linear in the "size" of the expression. So I don't understand what you mean by "quickly explodes".

~~~
tome
See here:

[http://h2.jaguarpaw.co.uk/posts/why-is-naive-symbolic-differ...](http://h2.jaguarpaw.co.uk/posts/why-is-naive-symbolic-differentiation-slow/)

In summary, yes, both functions are linear, but the size of the symbolic derivative is quadratic.

------
kazinator
I would swear I read some Lisp-related paper about this, with some nice Lisp advocacy in it, too.

Aha, here it is:
[http://www.cs.berkeley.edu/~fateman/papers/ADIL.pdf](http://www.cs.berkeley.edu/~fateman/papers/ADIL.pdf)

> _For fans of Lisp, there is no question that one motivation is to show how
> easy it is to implement in Lisp. Lisp provides a natural representation for
> programs as data and a natural form for writing programs that write
> programs, which is what we do in ADIL. The code is short, and is in ANSI
> standard Common Lisp.
Since it is not \"naive\" code written in just the\n> obvious idioms of introductory Lisp, it illustrates, for those with only a\n> cursory familiarity, that Lisp is more than CAR and CDR. In fact we did not\n> use those routines at all._\n\nAh, but you did use A and D. cAr-tomatic cDr-ivation!\n\n:)\n\n------\ndavexunit\nSurprised to not see SICP's excellent section on symbolic differentiation not\nmentioned here:\n[http://sarabander.github.io/sicp/html/2_002e3.xhtml#g_t2_002...](http://sarabander.github.io/sicp/html/2_002e3.xhtml#g_t2_002e3_002e2)\n\n------\n33a\nAlgebraically, automatic differentiation is the same as adding a nilpotent\nelement e, such that e^2=0 to your algebra. You can continue this pattern out\nto get higher order derivatives. For example, if you also add an element f\nwhere f^3=0, the coefficient of f is proportional to the second derivative.\n\n~~~\namelius\nSounds interesting. But could you explain this so that people without a\npostgraduate degree in mathematics can understand this?\n\n~~~\neru\nThe idea might be similar to [https://en.wikipedia.org/wiki/Non-\nstandard_analysis](https://en.wikipedia.org/wiki/Non-standard_analysis)\n\nEdit: Actually,\n[https://en.wikipedia.org/wiki/Smooth_infinitesimal_analysis](https://en.wikipedia.org/wiki/Smooth_infinitesimal_analysis)\nseems much closer.\n\n~~~\namelius\nInteresting. But...\n\n> by denying the law of the excluded middle, e.g., NOT (a \u2260 b) does not imply\n> a = b\n\nOuch, that is where my brain starts hurting.\n\n------\nturkishrevenge\nConal Elliot's paper on the subject is a really good starting point:\n[http://conal.net/papers/beautiful-\ndifferentiation/beautiful-...](http://conal.net/papers/beautiful-\ndifferentiation/beautiful-differentiation.pdf)\n\n------\nnwhitehead\nThis example makes it really clear what's going on. Could someone translate it\nto do reverse automatic differentiation? That's the one I never quite\nunderstand.\n\n~~~\nAnimats\nYou mean integration? That's much harder, but has been done automatically.\nSymbolic differentiation is easy, because you can just keep blindly applying a\nset of rewrite rules until none of them apply. That process converges on a\nunique result. Symbolic integration doesn't converge in that way. More\nstrategy is required, and you're not guaranteed a closed form solution.\nMathematica has a good symbolic integrator.\n\n~~~\ncperciva\n_Symbolic integration doesn 't converge in that way. More strategy is\nrequired, and you're not guaranteed a closed form solution._\n\nHowever, if a closed-form solution exists which can be expressed in terms of\nthe operations + - * / exp log, then it is guaranteed to be found.\n\n------\neru\nGreat article! It might benefit from a comparison with Oleg's Typed tagless-\nfinal interpretations ([http://okmij.org/ftp/tagless-\nfinal/course/](http://okmij.org/ftp/tagless-final/course/)).\n\n------\njfoutz\nThe thing that's great about the typeclass approach, you can do anything you\nwant behind the implementation. you can numerically evaluate the expression,\nbut even cooler, you can recover the parse tree. I never could sort out how to\ndeal with 'if', because it's not typeclassed. if it was, boy could you do some\namazing stuff. partial differentiation, tree rewriting, with the LLVM stuff\nyou could runtime compile arbitrary functions. super neat trick.\n\n~~~\nmpweiher\nThat's also what's great about pure dynamic systems like Smalltalk. 
'ifTrue:' is just a message-send; the class "False" doesn't evaluate the block parameter, the class "True" does. And yes, you can then recover the parse tree -- for example, the Gemstone OODB lets you use arbitrary blocks (closures) as queries, recovers the parse tree and then creates an optimized query from it. Quite neat.

------
finin
THE LISP DIFFERENTIATION DEMONSTRATION PROGRAM, K. Kaling, Artificial Intelligence Project, RLE and MIT Computation Center, AI Memo 10, 1959.

[https://archive.org/stream/bitsavers_mitaiaimAI_878286/AIM-0...](https://archive.org/stream/bitsavers_mitaiaimAI_878286/AIM-010_djvu.txt)

ftp://publications.ai.mit.edu/ai-publications/pdf/AIM-010.pdf

------
deepnet
Could Automatic Differentiation learn by applying learnable weights to the compiled S-expression atoms -- Backpropagating Errors applied with Stochastic Gradient Descent?

A program would constrain a task with functional statements, which is then compiled to weighted s-expressions that learn the specific task from training data.

A sort of neural-net / functional-program hybrid.

------
notthemessiah
Dual numbers are just a means of formalizing the properties of an epsilon (a tiny change in calculus), and are the means of preserving enough information to think of the function and its derivative at the same time. E.g.: (x + ε)² = x² + 2xε + ε², but ε² = 0, so we get x² + 2xε (a tiny change squared becomes an insignificantly tiny change).

------
platz
Forward-mode AD doesn't really scale. Reverse-mode AD is useful for the backpropagation algorithm in machine learning, however.

~~~
guest1539
What part doesn't scale?

~~~
kxyvr
I'm a little late to the party, but hopefully this'll explain.

Basically, you have to be careful about what it means to scale or not scale. If all you want is a derivative with respect to a single variable, forward mode scales just fine -- great, in fact. However, if you want the gradient, or the derivative with respect to every variable, then the forward mode does not scale well at all with respect to the number of variables. Specifically, assume we have m variables. The cost of calculating the derivative of an expression with respect to 1 variable is 2 times the cost of a function evaluation, 2 * eval. To see this, it's easiest to note that we don't need an expression tree for forward-mode AD like the article uses. Really, we can get away with just a tuple that contains the function evaluation as the first element and the partial derivative as the second element. Then, all of the rules are basically the same as in the article, but we're always doing one operation on the first element, whatever the function is, and a different operation on the second element for the partial derivative. This is twice the work, so 2 * eval. Since we have m variables, this becomes 2 * m * eval. And, yes, memory layouts, fewer optimizations for algebraic data types compared to floats, etc. mean that it's actually slower, but, honestly, it's pretty fast.

The reverse mode is different because it turns out that it can calculate the entire gradient, or all m partial derivatives, at 4 * eval cost. Note, this is independent of the number of variables. Proving this is a pain, so I can't give a good explanation here. Realistically, source code transformation tools perform around 10-20 * eval.
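To make that tuple representation concrete, here is a minimal forward-mode sketch in Haskell. It is only an illustration under my own naming (a Dual pair of value and derivative), not code from the article: every arithmetic operation does one piece of work for the value and one for the derivative, which is where the roughly 2 * eval cost per variable comes from.

    -- Forward-mode AD as a (value, derivative) pair; illustrative sketch only.
    data Dual = Dual Double Double deriving Show

    instance Num Dual where
      Dual x dx + Dual y dy = Dual (x + y) (dx + dy)
      Dual x dx * Dual y dy = Dual (x * y) (x * dy + dx * y)  -- product rule
      negate (Dual x dx)    = Dual (negate x) (negate dx)
      abs (Dual x dx)       = Dual (abs x) (dx * signum x)
      signum (Dual x _)     = Dual (signum x) 0
      fromInteger n         = Dual (fromInteger n) 0          -- constants carry derivative 0

    -- Differentiate f at x by seeding the input with derivative 1.
    diffAt :: (Dual -> Dual) -> Double -> Double
    diffAt f x = let Dual _ dx = f (Dual x 1) in dx

    main :: IO ()
    main = print (diffAt (\x -> x * x + 3 * x) 2)  -- 2*x + 3 at x = 2, prints 7.0

Overloading every numeric operation like this is also what the "operator overloading" style of AD tool does; the reverse-mode variants additionally record a tape of the operations performed.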
Operator overloading tools perform around 20-30 * eval, so it's slower in practice, but pretty damn good.

Now, unlike the forward mode, where we really only need a tuple to carry information, the reverse mode does require an expression tree. In order to understand why, it helps to note that the forward mode is really a directional (Gateaux) derivative and the reverse mode is the total (Frechet) derivative. This affects how the chain rule manifests. Specifically, the forward mode repeatedly applies two rules

(f o g)'(x) dx = f'(g(x)) g'(x) dx

(f o (g,h))'(x) dx = f'(g(x),h(x)) (g'(x)dx, h'(x)dx)

Basically, in the function evaluation, we do some operation g before f. In order to figure out the derivative, we also do the g derivative operation before the f derivative operation. The first rule is for unary operations like negation and the second rule is for binary operations like addition. Anyway, the reverse mode takes the Hilbert adjoint of this. Specifically:

(f o g)'(x)^* = g'(x)^* f'(g(x))^*

(f o (g,h))'(x)^* = [g'(x)^* h'(x)^*] f'(g(x),h(x))^*

We care about the adjoint because of a trick from the Riesz representation theorem. Specifically,

f'(x)dx =

(f'(x)dx)1 =

<f'(x)dx,1> =

<dx,f'(x)^* 1> =

<dx,grad f(x)>

where <.,.> denotes the inner product. Anyway, basically the gradient of f is the adjoint of the total derivative of f applied to 1. Therefore, if we knew the adjoint of a computation applied to 1, we'd get the gradient. In other words, we can rewrite the chain rule above as

grad (f o g)(x) = g'(x)^* grad f(g(x))

grad (f o (g,h))(x) = [g'(x)^* h'(x)^*] grad f(g(x),h(x))

That's the core of reverse-mode AD. Note, many, if not most, descriptions of reverse-mode AD talk about doing the chain rule in reverse and then they add dual variables, etc. That may be a description that's helpful for some, but not for me. In truth, it's just a bunch of adjoints applied to one, plus knowing the Riesz representation trick.

Now, reverse-mode AD does require an expression tree to be kept. The reason for this is that the computation above did g before f. However, if we look at the chain rule we have

grad (f o g)(x) = g'(x)^* grad f(g(x))

This means that in order to calculate the gradient of the composition, we need to know the gradient of f first, even though we did the evaluation of g first. However, we need to know the evaluation of g in order to calculate the gradient of f. The way we resolve this is that we evaluate the functions in order, but keep an expression tree of what we did. This gives all of the g(x), f(g(x)), etc. Then, we run over that expression tree backward to calculate all of the gradients. Because we run over the expression tree backwards, we call this the reverse mode.

How we run over the expression tree backwards is important and tricky to do right. The way that we can sort of see that we can do everything at 4 * eval cost is that the trick is not to create multiple vectors to store the gradient when running over the tree, but to have 1 vector and to update this vector with the new derivative information when required. Basically, we're just inserting information in the right spots, which can be done efficiently. In practice, storing the expression tree in memory can be really expensive. For example, imagine a for-loop that had 10 billion iterations. That's a really long expression tree to hold in memory.
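As a toy picture of what keeping and replaying that expression tree looks like, here is a small reverse-mode sketch in Haskell. Again, this is only an illustration with made-up names, not the article's code, and unlike a real tape it re-evaluates subexpressions on the way back instead of caching the forward values.

    import qualified Data.Map as M

    -- Record the computation as a tree, then sweep it backwards, pushing an
    -- adjoint ("bar") value from each node to its children and accumulating
    -- every partial derivative into one shared gradient map -- the single
    -- vector that gets updated, as described above.
    data Expr = Var String | Const Double | Add Expr Expr | Mul Expr Expr

    eval :: M.Map String Double -> Expr -> Double
    eval env (Var v)   = env M.! v
    eval _   (Const c) = c
    eval env (Add a b) = eval env a + eval env b
    eval env (Mul a b) = eval env a * eval env b

    -- 'bar' is d(output)/d(this node); the root starts at 1.
    backprop :: M.Map String Double -> Double -> Expr
             -> M.Map String Double -> M.Map String Double
    backprop _   bar (Var v)   g = M.insertWith (+) v bar g
    backprop _   _   (Const _) g = g
    backprop env bar (Add a b) g = backprop env bar b (backprop env bar a g)
    backprop env bar (Mul a b) g =
      backprop env (bar * eval env a) b (backprop env (bar * eval env b) a g)

    gradient :: M.Map String Double -> Expr -> M.Map String Double
    gradient env e = backprop env 1 e M.empty

    -- f(x,y) = x*y + x at (3,4): prints fromList [("x",5.0),("y",3.0)]
    main :: IO ()
    main = print (gradient (M.fromList [("x",3),("y",4)])
                           (Add (Mul (Var "x") (Var "y")) (Var "x")))

A single backward sweep produces the whole gradient, which is the 4 * eval behaviour described above; the price is the recorded tree itself, which is why the 10-billion-iteration loop is the painful case.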
Now, source code transformation tools are\nreally clever and don't actually store all of those expressions in memory, but\njust run back the for loop, which is why they're more efficient. Operator\noverloading techniques (algebraic data types) can technically optimize this as\nwell by doing some interesting caching techniques. However, the overall idea\nis that it can be expensive and there are lots of ways to do this wrong, but\nalso lots of places to do things right and be creative.\n\nAs aside to a comment left above, back propagation is indeed just reverse mode\nAD combined with a nonglobally convergent version of steepest descent. I've\nnever seen a paper that worked this out, but it's something that's known\nwithin the AD community. Someone, someday, should really write that down.\n\nAnyway, that's probably a much too long response to your simple question. In\nshort, forward mode doesn't scale when calculating gradients because the cost\nis 2 * m * eval whereas the reverse mode can do it in 4 * eval. For a single\nvariable, or an entire directional derivative, the forward mode scales fine\nand in fact works better than the reverse mode for this case.\n\nEdit: This formatting is killing me. Hopefully, it all looks fine now.\n\n"],["\n\nIs the Solar System Really a Vortex? - swamp40\nhttp://www.universetoday.com/107322/is-the-solar-system-really-a-vortex/\n\n======\nswamp40\nIt's not often you get to hear an astrophysicist say \"...then we\u2019re all\nbuggered.\"\n\n"],["\nTheoretical Motivations for Deep Learning - rndn\nhttp://rinuboney.github.io/2015/10/18/theoretical-motivations-deep-learning.html\n======\nchriskanan\nThere is a recent 5 page theoretical paper on this topic that I thought was\npretty interesting, and it tackles both deep nets and recurrent nets:\n[http://arxiv.org/abs/1509.08101](http://arxiv.org/abs/1509.08101)\n\nHere is the abstract:\n\nThis note provides a family of classification problems, indexed by a positive\ninteger k, where all shallow networks with fewer than exponentially (in k)\nmany nodes exhibit error at least 1/6, whereas a deep network with 2 nodes in\neach of 2k layers achieves zero error, as does a recurrent network with 3\ndistinct nodes iterated k times. The proof is elementary, and the networks are\nstandard feedforward networks with ReLU (Rectified Linear Unit)\nnonlinearities.\n\n------\narcanus\n1) I am curious about learning more about the statement: \"Deep learning is a\nbranch of machine learning algorithms based on learning multiple levels of\nrepresentation. The multiple levels of representation corresponds to multiple\nlevels of abstraction. \"\n\nWhat evidence exists that the 'multiple levels of representation', which I\nunderstand to generally be multiple hidden layers of a neural network,\nactually correspond to 'levels of abstraction'?\n\n2) I'm further confused by, \"Deep learning is a kind of representation\nlearning in which there are multiple levels of features. These features are\nautomatically discovered and they are composed together in the various levels\nto produce the output. Each level represents abstract features that are\ndiscovered from the features represented in the previous level. \"\n\nThis implies to me that this is \"unsupervised learning\". Are deep learning\nnets all unsupervised? 
Most traditional neural nets are supervised.\n\n~~~\njoe_the_user\nThe whole presentation seems very hand-wavy, which I think is pretty much the\nlevel most motivational discussions of deep learning are at.\n\nI think the presentations by Yann Lecun and Leon Bottou are more interesting -\nand tend to involve more uncertainty and fewer pronouncements.\n\nsee:\n[https://news.ycombinator.com/item?id=9878047](https://news.ycombinator.com/item?id=9878047)\n\n~~~\narcanus\nThis was fascinating and greatly informative. As you said, the authors were\nnot afraid to show the real warts and bleeding edge, as a good scientist\nshould. Thanks for the link.\n\n------\ndnautics\nI wonder if \"lots of data\" is wrong. If I show you say twenty similar-looking\nChinese characters in one person's handwriting, and the same twenty in another\nperson's handwriting, you'll probably do a good job (though maybe not an easy\ntime) classifying them with very little data.\n\n~~~\nwebmasterraj\nBecause I've seen lots of other handwriting, even if in another language. I\nhave very strong priors.\n\nThe problem is that a computer comes in without knowing anything about\ntangential phenomenon. So it needs lots of data to catch up to me and my years\nof forming associative connections about other handwriting I've seen.\n\nIf I showed you alien (ie not human) handwritten samples, you'd probably\nstuggle too.\n\n------\nilurk\nWhat tools did you use to make those nice pictures?\n\n(didn't read it yet though, will do when I have time)\n\n------\nmemming\nNice. Very well organized.\n\n"],["\nApply HN:Programmable matter - YuriyZ\nGoal: creation of programmable matter, consisting of many microscopic particles (c-atoms). Which can be manipulated to create a user programmed 3d form.<p>Achievements. Verified experimentally:<p>- ways to connect c-atoms with each other;<p>- the movement of c-atom relative to other c-atoms.<p>The experiments were conducted with models of c-atoms in the macro scale. The size of c-atoms models was 3 * 4 cm.<p>Tasks:\n- development of software capable of managing an array of c-atoms;\n- repeating experiments in micro-scale with size of c-atoms - less than one millimeter.\n======\npjlegato\nWhat are the possible commercial applications of this technology?\n\nHow will your company make money from this?\n\n~~~\nYuriyZ\n\\- Programmable matter will replace 3d prototyping, which is now carried out\nby 3d printers. \\- Will be used in telecommunications. The effect of presence\n- Pario. \\- The technology will be used in medicine. The surgeon will be able\nto operate on the patient by manipulating programmable matter, which will be\nan enlarged, precise, copy of the operated area. \\- Toys (gadgets)\ntransformers. The company will make money by selling and renting devices from\nthe programmable matter.\n\n"],["\nAsk HN: Good Resources for Data Engineering - fargo\nI am looking for some example case studies/exercises in order to learn play with some libraries, is there a book or website you can recommend?\n======\niso1337\nkleppmann\u2019s book: designing data-intensive applications.\n\nIt\u2019s very well written, but maybe doesn\u2019t have as much in the way of\nexercises.\n\n~~~\nfargo\nThanks for the excellent recommendation, I have been through kleppmann's book\nand it's a must for anyone who wants to be serious about data engineering (or\nwhatever it's called these days). 
I am looking, however, for something more practical and less technical -- maybe something like Project Euler or Cracking the Coding Interview, but for data.

~~~
iso1337
IMHO data eng is too niche and new for that kind of content. But I would love to see if there is anything out there like that.

Is the goal here to get through system design interviews or something like that? You can check out pramp.com if so.

If it's for learning, then reading some of the original Google papers behind a lot of the big data technologies has been very rewarding for me. You could try reimplementing the Paxos algorithm, for example.

~~~
fargo
I am a bit rusty with Spark, and I have a practical interview where I will be given various datasets to extract insights from them.


Ask HN: What is happening on Mt. Gox right now? - ljd

http://www.bitcoin.clarkmoody.com/
I'm not sure if anyone has been watching, but someone is buying 729.2489 BTC at 850. Which wouldn't be unusual if it wasn't followed by an exact buy of 135 at 775 right after. This cycle has happened, with the exact same amounts, for the past 30 minutes, and I don't know what to make of it. It's just in a loop. The market won't move either way.

I know there is some kind of gaming going on, I just don't know what it is, yet. Any ideas?
======
washedup
I can confirm this. I have watched the cycle happen from ~877 to ~829 ten times now, with a bid size of 729 at 850 every single time. As soon as a price around 829 is filled, it shoots back up to 877. Each time the cycle lasts roughly 5 minutes.

------
ChrisClark
It's a bug. Mt. Gox had the same repeating bug before.

Basically, don't trade on Mt. Gox. It's not a good idea.