Interesting Content in AI, Software, Business, and Tech - 06/26/2024 [Updates]
Content to help you keep up with Machine Learning, Deep Learning, Data Science, Software Engineering, Finance, Business, and more
Hey, it’s Devansh 👋👋
In issues of Updates, I will share interesting content I came across. While the focus will be on AI and Tech, the ideas might range from business, philosophy, ethics, and much more. The goal is to share interesting content with y’all so that you can get a peek behind the scenes into my research process.
I put a lot of effort into creating work that is informative, useful, and independent from undue influence. If you’d like to support my writing, please consider becoming a paid subscriber to this newsletter. Doing so helps me put more effort into writing/research, reach more people, and supports my crippling chocolate milk addiction. Help me democratize the most important ideas in AI Research and Engineering to over 100K readers weekly.
PS- We follow a “pay what you can” model, which allows you to support within your means. Check out this post for more details and to find a plan that works for you.
A lot of people reach out to me for reading recommendations. I figured I’d start sharing whatever AI Papers/Publications, interesting books, videos, etc I came across each week. Some will be technical, others not really. I will add whatever content I found really informative (and I remembered throughout the week). These won’t always be the most recent publications- just the ones I’m paying attention to this week. Without further ado, here are interesting readings/viewings for 06/26/2024. If you missed last week’s readings, you can find it here.
Reminder- We started an AI Made Simple Subreddit. Come join us over here- https://www.reddit.com/r/AIMadeSimple/. If you’d like to stay on top of community events and updates, join the discord for our cult here: https://discord.com/invite/EgrVtXSjYf. Lastly, if you’d like to get involved in our many fun discussions, you should join the Substack group chat over here:
Community Spotlight:
I’ve decided that I want to spend more time going over how organizations solve ML Engineering problems, to add some more diversity to our currently research-heavy focus. While studying important, up-and-coming, and new ideas is extremely valuable, it’s worth remembering that a lot of ML Engineering is built from relatively simple ideas combined in different ways. If you/your team have solved a problem that you’d like to share with the rest of the world, shoot me a message and let’s go over the details.
If you’re doing interesting work and would like to be featured in the spotlight section, just drop your introduction in the comments/by reaching out to me. There are no rules- you could talk about a paper you’ve written, an interesting project you’ve worked on, some personal challenge you’re working on, ask me to promote your company/product, or anything else you consider important. The goal is to get to know you better, and possibly connect you with interesting people in our chocolate milk cult. No costs/obligations are attached.
Previews
Curious about what articles I’m working on? Here are the previews for the next planned articles-
How To Make Less Dumb Mistakes When Programming (inspired by this video on adding rigor to programming).
The use of AI in Bio-Tech and drug-discovery space. We’ll be covering a very interesting organization I came across recently. If any of you have any insight into the drug discovery domain, I would love to talk to you to get additional context.
Highly Recommended
These are pieces that I feel are particularly well done. If you don’t have much time, make sure you at least catch these works.
Heuristics on the high seas: Mathematical optimization for cargo ships
A great share by the always insightful Barak Epstein. Shipping makes the world go round, so any optimizations done there are worth paying attention to.
Look around you. Chances are that something in your line of sight sailed on a cargo ship. 90% of the world’s goods travel over the ocean, often on cargo vessels mammoth in scale: a quarter mile long, weighing 250,000 tons, holding 12,000 containers of goods collectively worth a billion dollars. Unlike airplanes, trains, and trucks, cargo ships are in nearly constant operation, following cyclical routes across oceans.
But, what are the best, most efficient routes for these ships? To a computer scientist, this is a graph theory problem; to a business analyst, a supply chain problem. Done poorly, containers linger at ports, ships idle offshore unable to berth, and ultimately, products become pricier as the flow of physical items becomes slower and unpredictable. Every container shipping company needs to solve these challenges, but they are typically solved separately. Combining them multiplies the complexity, and, to the best of our knowledge, is a problem that has never been solved at the scale required by the largest container operations (500 vessels and 1500 ports).
Google’s Operations Research team is proud to announce the Shipping Network Design API, which implements a new solution to this problem. Our approach scales better, enabling solutions to world-scale supply chain problems, while being faster than any known previous attempts. It is able to double the profit of a container shipper, deliver 13% more containers, and do so with 15% fewer vessels. Read on to see how we did it.
Distributed constrained combinatorial optimization leveraging hypergraph neural networks
Scalable addressing of high-dimensional constrained combinatorial optimization problems is a challenge that arises in several science and engineering disciplines. Recent work introduced novel applications of graph neural networks for solving quadratic-cost combinatorial optimization problems. However, effective utilization of models such as graph neural networks to address general problems with higher-order constraints is an unresolved challenge. This paper presents a framework, HypOp, that advances the state of the art for solving combinatorial optimization problems in several aspects: (1) it generalizes the prior results to higher-order constrained problems with arbitrary cost functions by leveraging hypergraph neural networks; (2) it enables scalability to larger problems by introducing a new distributed and parallel training architecture; (3) it demonstrates generalizability across different problem formulations by transferring knowledge within the same hypergraph; (4) it substantially boosts the solution accuracy compared with the prior art by suggesting a fine-tuning step using simulated annealing; and (5) it shows remarkable progress on numerous benchmark examples, including hypergraph MaxCut, satisfiability and resource allocation problems, with notable run-time improvements using a combination of fine-tuning and distributed training techniques. We showcase the application of HypOp in scientific discovery by solving a hypergraph MaxCut problem on a National Drug Code drug-substance hypergraph. Through extensive experimentation on various optimization problems, HypOp demonstrates superiority over existing unsupervised-learning-based solvers and generic optimization methods.
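The fine-tuning step the abstract mentions, simulated annealing on a hypergraph MaxCut instance, is easy to illustrate in miniature. The sketch below is my own toy illustration of the general technique, not HypOp's code; the problem instance, cooling schedule, and function name are all made up for demonstration. A hyperedge counts as "cut" when it touches both sides of the partition.

```python
import math
import random

def hypergraph_maxcut_sa(num_nodes, hyperedges, steps=5000, t0=2.0, seed=0):
    """Toy simulated-annealing solver for hypergraph MaxCut.

    A hyperedge (a set of node indices) is 'cut' when it contains
    nodes from both sides of the 0/1 partition.
    """
    rng = random.Random(seed)
    side = [rng.randint(0, 1) for _ in range(num_nodes)]

    def cut_size(assign):
        return sum(1 for e in hyperedges
                   if len({assign[v] for v in e}) == 2)

    best, best_cut = side[:], cut_size(side)
    cur_cut = best_cut
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9   # linear cooling
        v = rng.randrange(num_nodes)
        side[v] ^= 1                            # propose flipping one node
        new_cut = cut_size(side)
        # always accept improvements; accept worse moves with
        # Boltzmann probability so we can escape local optima
        if new_cut >= cur_cut or rng.random() < math.exp((new_cut - cur_cut) / temp):
            cur_cut = new_cut
            if cur_cut > best_cut:
                best, best_cut = side[:], cur_cut
        else:
            side[v] ^= 1                        # revert the flip
    return best, best_cut

edges = [{0, 1, 2}, {1, 3}, {2, 3, 4}]
partition, cut = hypergraph_maxcut_sa(5, edges)
print(partition, cut)
```

HypOp's actual contribution is using a hypergraph neural network to produce the initial solution that annealing then refines; the annealing loop itself is the conceptually simple part shown here.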
Simulating 500 million years of evolution with a language model
A little bit skeptical about some of the things here, but still extremely exciting stuff. Can’t wait to study it more.
More than three billion years of evolution have produced an image of biology encoded into the space of natural proteins. Here we show that language models trained on tokens generated by evolution can act as evolutionary simulators to generate functional proteins that are far away from known proteins. We present ESM3, a frontier multimodal generative language model that reasons over the sequence, structure, and function of proteins. ESM3 can follow complex prompts combining its modalities and is highly responsive to biological alignment. We have prompted ESM3 to generate fluorescent proteins with a chain of thought. Among the generations that we synthesized, we found a bright fluorescent protein at far distance (58% identity) from known fluorescent proteins. Similarly distant natural fluorescent proteins are separated by over five hundred million years of evolution
We have a strong bio-tech focus this week b/c of all my reading into that space. And when it comes to TechBio/Bio-Tech (what is the difference), Marina T Alamanou, PhD is on the GOAT list for my favorite resources.
Why AI Companies are investing into DevRel
This is a piece we did on my sister publication, Tech Made Simple, on the rise of AI-Relations roles. Figured I’d weigh in on this industry trend and speculate on how one might break into it.
Recently, I’ve noticed a very interesting trend in the messages I get on LinkedIn. A lot of the recent messages I get from recruiters come from people looking to hire me in an AI Relations Role-
The ratio of people who reach out to me for AIRel vs ML roles has gone up significantly over the last 2–3 months. AIRel is a very new idea, itself a spin-off from the relatively new Developer Relations (DevRel) title. For those of you looking to either pivot or break into AI, it might be an interesting alternative to the traditional Engineer, PM, or Researcher roles. In this piece, I will give you more detail on the AI Relations role- including its background, why it’s so valuable to startups, and how you can break in. I think the relations path is worth exploring b/c it is very valuable across the board, and you will often find startups aggressively hiring for it. This can help you a lot in uncertain economic times.
Given all the talk about the coming of AGI, this paper is a good reality-check. Maybe Ilya left his job for nothing?
Large Language Models (LLMs) like closed weights ones GPT-3.5/4, Claude, Gemini or open weights ones like LLaMa 2/3, Mistral, Mixtral, and more recent ones Dbrx or Command R+ are often described as being instances of foundation models — that is, models that transfer strongly across various tasks and conditions in few-shot or zero-shot manner, while exhibiting scaling laws that predict function improvement when increasing the pre-training scale. These claims of excelling in different functions and tasks rely on measurements taken across various sets of standardized benchmarks showing high scores for such models. We demonstrate here a dramatic breakdown of function and reasoning capabilities of state-of-the-art models trained at the largest available scales which claim strong function, using a simple, short, conventional common sense problem formulated in concise natural language, easily solvable by humans. The breakdown is dramatic, as models also express strong overconfidence in their wrong solutions, while providing often non-sensical “reasoning”-like explanations akin to confabulations to justify and back up the validity of their clearly failed responses, making them sound plausible. Various standard interventions in an attempt to get the right solution, like various types of enhanced prompting, or urging the models to reconsider the wrong solutions again by multi-step re-evaluation, fail. We take these initial observations to the scientific and technological community to stimulate urgent re-assessment of the claimed capabilities of the current generation of LLMs. Such re-assessment also requires common action to create standardized benchmarks that would allow proper detection of such basic reasoning deficits that obviously manage to remain undiscovered by current state-of-the-art evaluation procedures and benchmarks.
Scaling Instructable Agents Across Many Simulated Worlds
The results are okay, but this is an interesting idea by the crew at Deepmind. Can’t wait to never hear about it again (outside of maybe research discussions).
Building embodied AI systems that can follow arbitrary language instructions in any 3D environment is a key challenge for creating general AI. Accomplishing this goal requires learning to ground language in perception and embodied actions, in order to accomplish complex tasks. The Scalable, Instructable, Multiworld Agent (SIMA) project tackles this by training agents to follow free-form instructions across a diverse range of virtual 3D environments, including curated research environments as well as open-ended, commercial video games. Our goal is to develop an instructable agent that can accomplish anything a human can do in any simulated 3D environment. Our approach focuses on language-driven generality while imposing minimal assumptions. Our agents interact with environments in real-time using a generic, human-like interface: the inputs are image observations and language instructions and the outputs are keyboard-and-mouse actions. This general approach is challenging, but it allows agents to ground language across many visually complex and semantically rich environments while also allowing us to readily run agents in new environments. In this paper we describe our motivation and goal, the initial progress we have made, and promising preliminary results on several diverse research environments and a variety of commercial video games.
Tech Bros Invented Trains And It Broke Me
It’s a bit weird how so many so-called revolutionary transport methods end up being worse versions of trains. I wonder why people don’t just skip all the additional steps and invest in more trains and buses.
Building a Perplexity AI clone
A great hands-on exploration by our man Alejandro Piad Morffis. If you’re looking for some clear and knowledgeable analysis- he’s always a very good resource to turn to.
How Python Compares Floats and Ints: When Equals Isn’t Really Equal
If you want in-depth analysis of the coding systems- Abhinav Upadhyay is your guy. Every single article is like a chapter in a very advanced textbook.
Python compares the integer value against the double precision representation of the float, which may involve a loss of precision and cause surprising discrepancies. This article goes deep into the details of how CPython performs these comparisons, providing a perfect opportunity to explore these complexities.
So this is what we will cover:
Quick revision of the IEEE-754 double precision format — this is how floating-point numbers are represented in memory
Analyzing the IEEE-754 representation of the three numbers
The CPython algorithm for comparing floats and ints
Analyzing the three test scenarios in the context of the CPython algorithm
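To get a feel for the kind of discrepancy the article digs into, here is a quick illustration (my own example, not taken from the article). 2**53 is the last point at which a double can represent every integer exactly, so converting 2**53 + 1 to a float silently rounds it back down:

```python
big = 2**53  # 9007199254740992: beyond this, doubles skip odd integers

print(big == float(big))           # True: exactly representable as a double
print(big + 1 == float(big + 1))   # False: float(2**53 + 1) rounds to 2**53
print(float(big + 1) == big)       # True: the rounded float equals the smaller int
```

The middle comparison is the surprising one: the int and "its own" float conversion compare unequal, because the conversion lost precision while the comparison itself did not.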
FOD#52: OpenAI’s new GPT-4o — what it can and cannot do
Meant to share this earlier, but things happened and I couldn’t. Regardless, another masterpiece by Ksenia Se.
Trying out the new GPT-4o and — as always — providing you with the most relevant news, research papers, and must-reads.
Other Good Content
Japan Spent 60 Billion Dollars Defending The Yen!
Over a four-day period, Japan is suspected to have carried out two interventions to support the yen at an estimated cost of $59 billion. The first intervention came after the yen fell below 160 to the dollar for the first time in 34 years. The second came a few days later, after Jerome Powell announced that a rate hike was unlikely to be the Fed’s next interest-rate move. The simplest explanation for the declining yen is that it is entirely driven by Japanese interest rates being low relative to other developed markets: people take their money out of the yen, which is yielding 0, and put it in dollar-denominated bonds to earn 5%, leading to a decline in the yen. But my friend Manoj Pradhan at Talking Heads Macro argues that this is a lazy oversimplification, and that the yen and Japanese markets are possibly the most interesting story in macroeconomics today.
Promises and pitfalls of artificial intelligence for legal applications
Is AI set to redefine the legal profession? We argue that this claim is not supported by the current evidence. We dive into AI’s increasingly prevalent roles in three types of legal tasks: information processing; tasks involving creativity, reasoning, or judgment; and predictions about the future. We find that the ease of evaluating legal applications varies greatly across legal tasks based on the ease of identifying correct answers and the observability of information relevant to the task at hand. Tasks that would lead to the most significant changes to the legal profession are not only harder to evaluate; they are also most prone to overoptimism about AI capabilities. We make recommendations for better evaluation and deployment of AI in legal contexts.
781: Ensuring Successful Enterprise AI Deployments — with Sol Rashidi
I’m normally not a huge fan of career/AI podcasts, but Jon Krohn always does a great job.
Explore successful enterprise AI with JonKrohnLearns and Sol Rashidi, celebrated C-suite data leader and author of “Your AI Survival Guide,” as they unpack the intricacies of AI project success and the persistent issue of high turnover among executives. Discover Sol’s unique strategies and insights gleaned from leading Fortune 100 companies. Perfect for those looking to enhance their leadership skills and understanding of AI in business.
A very approachable illustration of how every situation tends to have lots of trade-offs, by Andrew Smith.
True Facts: Pigeons Are Tricking You
Turns out pigeons are much cooler than I realized.
If you liked this article and wish to share it, please refer to the following guidelines.
Reach out to me
Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.
Small Snippets about Tech, AI and Machine Learning over here
AI Newsletter- https://artificialintelligencemadesimple.substack.com/
My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/
Check out my other articles on Medium: https://rb.gy/zn1aiu
My YouTube: https://rb.gy/88iwdd
Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y
My Instagram: https://rb.gy/gmvuy9
My Twitter: https://twitter.com/Machine01776819