In Mountain View I watched someone from Facebook zoom into a poor neighborhood in another country on a screen and show me how many devices that particular home has connected to the Internet. In Boston, at an engineering school, I gave two talks and heard from 20-year-old engineering students wondering what to make in the bleakness of the future. In Malawi, a country where a third of the children are stunted, where there’s been drought for two years and, when the rains come, so do the floods, I heard from a Malawian who wants Trump to deport ‘Africans back to Africa’ to stem the brain drain. An aid worker I couldn’t get away from on a weekend trip blamed Malawians for being lazy and “submissive” and screamed at people doing their job for doing their job because their noise disturbed her. Two British motorcyclists riding from Cairo to Cape Town said the only people who had been rude to them were two other Brits who ignored them at a gas station - everyone else was kind. In Turkey, a Syrian from Aleppo sold me Turkish delight. Another showed me his electrocution scars. I’ve never heard a rich person say “we’re all human”, but the poor say it frequently, often as a reminder - perhaps in general, to me as an American, or to themselves, as an inquisition - that we don’t forget, because we forget. Demand the universal.
Generating Videos with Scene Dynamics
Proof-of-concept computer science research from Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba can generate video content from a single input image, using a neural network trained on large amounts of unlabeled video:
We capitalize on large amounts of unlabeled video in order to learn a model of scene dynamics for both video recognition tasks (e.g. action classification) and video generation tasks (e.g. future prediction). We propose a generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene’s foreground from the background. Experiments suggest this model can generate tiny videos up to a second at full frame rate better than simple baselines, and we show its utility at predicting plausible futures of static images. Moreover, experiments and visualizations show the model internally learns useful features for recognizing actions with minimal supervision, suggesting scene dynamics are a promising signal for representation learning. We believe generative video models can impact many applications in video understanding and simulation.
I want to watch this in a cinema.
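If you’re curious what “untangling the scene’s foreground from the background” looks like in practice, here is a minimal sketch of such a two-stream generator in PyTorch. This is not the authors’ released code: the class name, layer sizes, and the 16-frame, 32×32 output are illustrative assumptions (the paper’s own model generates larger clips). The idea it demonstrates is the composite: a 3D-convolutional stream produces moving foreground pixels plus a soft mask, and a 2D-convolutional stream produces one static background frame.

```python
# Minimal sketch (assumed sizes, not the paper's code) of a two-stream
# video GAN generator: mask * foreground + (1 - mask) * static background.
import torch
import torch.nn as nn

class TwoStreamVideoGenerator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        # Foreground stream: spatio-temporal (3D) transposed convolutions
        # expand a latent vector into a short volume of moving features.
        self.foreground = nn.Sequential(
            nn.ConvTranspose3d(z_dim, 256, kernel_size=(2, 4, 4)),  # -> 2x4x4
            nn.BatchNorm3d(256), nn.ReLU(True),
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),   # -> 4x8x8
            nn.BatchNorm3d(128), nn.ReLU(True),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),    # -> 8x16x16
            nn.BatchNorm3d(64), nn.ReLU(True),
        )
        # Two heads on the shared features: RGB foreground video, and a
        # sigmoid mask giving per-pixel foreground/background weights.
        self.fg_rgb = nn.Sequential(
            nn.ConvTranspose3d(64, 3, 4, stride=2, padding=1), nn.Tanh())
        self.fg_mask = nn.Sequential(
            nn.ConvTranspose3d(64, 1, 4, stride=2, padding=1), nn.Sigmoid())
        # Background stream: plain 2D convolutions make one static frame.
        self.background = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, kernel_size=4),          # -> 4x4
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),    # -> 8x8
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1),     # -> 16x16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),      # -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):
        f = self.foreground(z.view(z.size(0), -1, 1, 1, 1))
        fg, mask = self.fg_rgb(f), self.fg_mask(f)
        # Add a time axis so the static background broadcasts over frames.
        bg = self.background(z.view(z.size(0), -1, 1, 1)).unsqueeze(2)
        # Composite: moving pixels where the mask is high, else background.
        return mask * fg + (1 - mask) * bg

video = TwoStreamVideoGenerator()(torch.randn(1, 100))
print(video.shape)  # (1, 3, 16, 32, 32): 16 frames of 32x32 RGB
```

Because the background stream can only emit one frame, anything that moves is forced through the masked foreground stream, which is how the separation emerges without labels.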
Character: Evie Frye from Assassin’s Creed Syndicate
Cosplayer: Me! https://www.facebook.com/mightymouseanj
Photographer: https://www.facebook.com/CosplayAtsume/ / @ladyauroracosplay

San Francisco Comic Con was the debut of my hardest and most time-consuming cosplay to date. It took about 8 months off and on, but I powered through and learned a lot along the way. I also got to see Janet Varney (Korra) again and meet Grey DeLisle (Azula).