“The number of parameters in a neural network model is actually increasing on the order of 10x year on year. This is an exponential that I’ve never seen before and it’s something that is incredibly fast and outpaces basically every technology transition I’ve ever seen… So 10x year on year means if we’re at 10 billion parameters today, we’ll be at 100 billion tomorrow,” he said. “Ten billion today maxes out what we can do on hardware. What does that mean?”

– Naveen Rao, Intel
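Rao’s arithmetic is simple compound growth: the parameter count multiplies by 10 each year. A minimal sketch of that projection, taking the 10-billion starting point and the 10x annual factor from the quote (the five-year horizon is purely illustrative, not from the source):

```python
# Project parameter counts under the 10x year-on-year growth Rao describes.
# Starting value (10 billion) and growth factor (10x) come from the quote;
# the five-year horizon is an arbitrary illustration.
start_params = 10e9    # 10 billion parameters "today"
growth_factor = 10     # 10x year on year

for year in range(6):
    params = start_params * growth_factor ** year
    print(f"year {year}: {params:.0e} parameters")
```

Run as-is, this prints 1e+10 through 1e+15: at the quoted rate, models would reach a quadrillion parameters within five years, which is the point of Rao’s “maxes out what we can do on hardware” remark.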