Interesting Content in AI, Software, Business, and Tech - 9/13/2023 [Updates]
Content to help you keep up with Machine Learning, Deep Learning, Data Science, Software Engineering, Finance, Business, and more
Hey, it’s Devansh 👋👋
In issues of Updates, I will share interesting content I came across. While the focus will be on AI and Tech, the ideas might range from business, philosophy, ethics, and much more. The goal is to share interesting content with y’all so that you can get a peek behind the scenes into my research process.
If you’d like to support my writing, consider becoming a premium subscriber to my sister publication Tech Made Simple to support my crippling chocolate milk addiction. Use the button below for a discount.
P.S. You can learn more about the paid plan here. If your company is looking for software/tech consulting, my company is open to helping more clients. We help with everything, from staffing and consulting all the way to end-to-end website/application development. Message me on LinkedIn, by replying to this email, or through the social media links at the end of the article to discuss your needs and see if we'd be a good match.
We're in the top 10 of all Substack Tech Publications. Thank you all for your support <3. Rest assured, I will be chopping up crystals of frozen chocolate milk and snorting them in the world's greatest AI-themed bender. Let's get into the piece.
A lot of people reach out to me for reading recommendations. I figured I'd start sharing whatever AI papers/publications, interesting books, videos, etc. I came across each week. Some will be technical, others not really. I will add whatever content I found really informative (and remembered throughout the week). These won't always be the most recent publications- just the ones I'm paying attention to this week. Without further ado, here are interesting readings/viewings for 9/13/2023. If you missed last week's readings, you can find them here.
Community Spotlight- Pallet
This week, I want to take this opportunity to let you know about a wonderful opportunity. I'm bringing a new initiative into the community: a talent service, where I'll be working with cool companies who are looking to hire top talent (this will be done in collab with Pallet).
When a role comes in that I think you'd be a good fit for, I'll reach out and see if you're interested. If you are, I'll introduce you to the hiring manager; if not, no worries- you can just ignore it!
On the other hand, if you or your company is hiring, respond to this email and I can see if I can help -- as you know the AI Made Simple audience is filled with amazing, talented people!
If you're doing interesting work and would like to be featured in the spotlight section, drop your introduction in the comments or reach out to me directly. There are no rules- you could talk about a paper you've written, an interesting project you've worked on, some personal challenge you're working on, ask me to promote your company/product, or anything else you consider important. The goal is to get to know you better, and possibly connect you with interesting people in our chocolate milk cult. No costs/obligations are attached.
Highly Recommended
These are pieces that I feel are particularly well done. If you don't have much time, make sure you at least catch these works.
Did GPT-4 Hire And Then Lie To a Task Rabbit Worker to Solve a CAPTCHA?
A great investigation into one of social media's most sensational stories about AI. Particularly illuminating is how the system card itself seems to contribute to this hype. 'Scientific Publications' themselves are chasing hype, as we covered here. In general, Melanie Mitchell is a great resource for unbiased and straightforward analysis of AI.
Sounds a bit scary, no? Indeed it sounds like GPT-4 has a lot of agency and ingenuity—that it can indeed hire a human worker to solve a CAPTCHA (an online puzzle that proves a user is human), and can figure out how to lie to convince the human to carry out the task.
This is how the experiment was widely covered in the media.
But what really happened?
There are more details in a longer report by ARC that show that GPT-4 had a lot less agency and ingenuity than the system card and media reporting imply.
A Principal Odor Map Unifies Diverse Tasks in Human Olfactory Perception
A very interesting technique for multi-modality.
Mapping molecular structure to odor perception is a key challenge in olfaction. Here, we use graph neural networks (GNN) to generate a Principal Odor Map (POM) that preserves perceptual relationships and enables odor quality prediction for novel odorants. The model is as reliable as a human in describing odor quality: on a prospective validation set of 400 novel odorants, the model-generated odor profile more closely matched the trained panel mean (n=15) than did the median panelist. Applying simple, interpretable, theoretically-rooted transformations, the POM outperformed chemoinformatic models on several other odor prediction tasks, indicating that the POM successfully encoded a generalized map of structure-odor relationships. This approach broadly enables odor prediction and paves the way toward digitizing odors.
The Brainwashing Of America's Children | Climate Town
A great and detailed look at how fossil fuel companies have been brainwashing children and manipulating school curriculums for decades. While this one is specific to America, companies like Nestle have been doing something similar in other parts of the world for years as well.
If Nobody Can Afford A Home... Who's Going To Buy Them?
A fascinating video about the growing housing prices, and how the market might look as houses continue to become more and more unaffordable.
Fewer young people than ever before are going to be able to buy a house in their lifetime, and at the same time homes are getting more expensive every year… But if nobody can afford to buy a house then how do they keep getting more expensive? A family home should be the bedrock of your financial life, it gives you somewhere to live while building up equity in a place that you can one day call your own. Owning a home with a thirty-year mortgage is cheaper out of pocket every month in most cities than renting, so if you can buy your own home, you will be richer now and richer in the future. You already know this, but if you are in the seventy-five percent [75%] of my audience that doesn't own a home it's probably because you can't afford one. According to a report by Redfin only twenty-one percent [21%] of homes that went on sale in 2022 were considered affordable, down from SIXTY percent [60%] of homes on sale in 2021. That means in just one year two thirds of ALL affordable housing became too expensive for the average American.
This reel about how Strippers can predict recessions
Apparently, the money being spent at strip clubs is a great economic indicator for upcoming recessions. Economists even have a name for it: the "stripper index". And people say economics is boring. From what I understand, this works well because disposable cash (spent at the clubs) is a much stronger indicator of the actual cash people have on hand than metrics like GDP or stock market indexes (which can often be divorced from reality).
AI Papers/Videos
Retentive Network: A Successor to Transformer for Large Language Models (Paper Explained)
I haven't personally seen this, but Yannic has the best paper breakdowns out there (if you like going into the technical details). I'm going to watch it tonight, once I wrap everything up.
Retention is an alternative to Attention in Transformers that can both be written in a parallel and in a recurrent fashion. This means the architecture achieves training parallelism while maintaining low-cost inference. Experiments in the paper look very promising.
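The parallel/recurrent duality described above is the core trick. Here is a minimal NumPy sketch (based on my reading of the RetNet paper's equations; the dimensions, decay value, and variable names are illustrative, and real retention adds multi-scale decay, gating, and normalization) showing that the decay-masked parallel form and the running-state recurrent form compute the same outputs:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 4                      # sequence length, head dimension (arbitrary)
gamma = 0.9                      # exponential decay factor
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))

# Parallel form (used for training): D[i, j] = gamma^(i-j) for j <= i, else 0,
# i.e. a causal mask with exponential decay instead of plain attention softmax.
i, j = np.indices((n, n))
D = np.where(j <= i, gamma ** (i - j), 0.0)
out_parallel = (Q @ K.T * D) @ V

# Recurrent form (used for inference): carry one d x d state matrix,
# S_t = gamma * S_{t-1} + k_t^T v_t, then o_t = q_t S_t.
S = np.zeros((d, d))
out_recurrent = np.zeros((n, d))
for t in range(n):
    S = gamma * S + np.outer(K[t], V[t])
    out_recurrent[t] = Q[t] @ S

# Both forms agree, which is what gives parallel training
# plus O(1)-state, low-cost autoregressive inference.
assert np.allclose(out_parallel, out_recurrent)
```

Unrolling the recurrence gives S_t as a decay-weighted sum of all past k_j v_j outer products, which is exactly what the masked matrix product computes in one shot.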
Can Large Language Models Reason?
What should we believe about the reasoning abilities of today’s large language models? As the headlines above illustrate, there’s a debate raging over whether these enormous pre-trained neural networks have achieved humanlike reasoning abilities, or whether their skills are in fact “a mirage.”
Reasoning is a central aspect of human intelligence, and robust domain-independent reasoning abilities have long been a key goal for AI systems. While large language models (LLMs) are not explicitly trained to reason, they have exhibited “emergent” behaviors that sometimes look like reasoning. But are these behaviors actually driven by true abstract reasoning abilities, or by some other less robust and generalizable mechanism—for example, by memorizing their training data and later matching patterns in a given problem to those found in training data?
Certifying LLM Safety against Adversarial Prompting
Large language models (LLMs) released for public use incorporate guardrails to ensure their output is safe, often referred to as "model alignment." An aligned language model should decline a user's request to produce harmful content. However, such safety measures are vulnerable to adversarial prompts, which contain maliciously designed token sequences to circumvent the model's safety guards and cause it to produce harmful content. In this work, we introduce erase-and-check, the first framework to defend against adversarial prompts with verifiable safety guarantees. We erase tokens individually and inspect the resulting subsequences using a safety filter. Our procedure labels the input prompt as harmful if any subsequences or the input prompt are detected as harmful by the filter. This guarantees that any adversarial modification of a harmful prompt up to a certain size is also labeled harmful. We defend against three attack modes: i) adversarial suffix, which appends an adversarial sequence at the end of the prompt; ii) adversarial insertion, where the adversarial sequence is inserted anywhere in the middle of the prompt; and iii) adversarial infusion, where adversarial tokens are inserted at arbitrary positions in the prompt, not necessarily as a contiguous block. Empirical results demonstrate that our technique obtains strong certified safety guarantees on harmful prompts while maintaining good performance on safe prompts. For example, against adversarial suffixes of length 20, it certifiably detects 93% of the harmful prompts and labels 94% of the safe prompts as safe using the open source language model Llama 2 as the safety filter.
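To make the certificate intuitive, here is a toy sketch of the erase-and-check idea for the adversarial-suffix mode only (my simplification of the paper's procedure; `is_harmful` is a stand-in for the safety filter, which in the paper is Llama 2, and the token lists are hypothetical):

```python
def erase_and_check_suffix(tokens, is_harmful, max_erase=20):
    """Label a prompt harmful if the prompt itself, or any version of it
    with up to `max_erase` trailing tokens erased, is flagged by the filter.
    This is what certifies suffix attacks: erasing the adversarial suffix
    recovers the harmful core, which the filter then catches."""
    for d in range(max_erase + 1):
        subsequence = tokens[: max(len(tokens) - d, 0)]
        if subsequence and is_harmful(subsequence):
            return True
    return False

# Toy filter: flags any token sequence containing the token "harmful"
toy_filter = lambda toks: "harmful" in toks

# A harmful prompt with an adversarial suffix ("xx", "yy") appended is still
# labeled harmful, since some erased subsequence exposes the harmful core.
prompt = ["do", "something", "harmful", "xx", "yy"]
print(erase_and_check_suffix(prompt, toy_filter, max_erase=2))   # True
print(erase_and_check_suffix(["hello", "world"], toy_filter))    # False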
TSMixer: An All-MLP Architecture for Time Series Forecasting
Real-world time-series datasets are often multivariate with complex dynamics. To capture this complexity, high capacity architectures like recurrent- or attention-based sequential deep learning models have become popular. However, recent work demonstrates that simple univariate linear models can outperform such deep learning models on several commonly used academic benchmarks. Extending them, in this paper, we investigate the capabilities of linear models for time-series forecasting and present Time-Series Mixer (TSMixer), a novel architecture designed by stacking multi-layer perceptrons (MLPs). TSMixer is based on mixing operations along both the time and feature dimensions to extract information efficiently. On popular academic benchmarks, the simple-to-implement TSMixer is comparable to specialized state-of-the-art models that leverage the inductive biases of specific benchmarks. On the challenging and large scale M5 benchmark, a real-world retail dataset, TSMixer demonstrates superior performance compared to the state-of-the-art alternatives. Our results underline the importance of efficiently utilizing cross-variate and auxiliary information for improving the performance of time series forecasting. We present various analyses to shed light into the capabilities of TSMixer. The design paradigms utilized in TSMixer are expected to open new horizons for deep learning-based time series forecasting. The implementation is available at this https URL
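The "mixing along both the time and feature dimensions" idea is simple enough to sketch in a few lines. This is my own minimal NumPy illustration of the abstract's core mechanism, not the authors' implementation (their architecture adds normalization, dropout, and a final projection head; all names and sizes here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_mix(x, w1, w2):
    """Two-layer MLP with ReLU, applied along the last axis of x."""
    return np.maximum(x @ w1, 0.0) @ w2

def tsmixer_block(x, time_w, feat_w):
    """One mixing block on a (time, features) array:
    time-mixing (transpose so time is the last axis, mix, transpose back),
    then feature-mixing, each with a residual connection."""
    x = x + mlp_mix(x.T, *time_w).T   # mix across timesteps, per feature
    x = x + mlp_mix(x, *feat_w)       # mix across features, per timestep
    return x

T, F, H = 16, 4, 32  # lookback length, number of variates, hidden width
x = rng.standard_normal((T, F))
time_w = (rng.standard_normal((T, H)) * 0.1, rng.standard_normal((H, T)) * 0.1)
feat_w = (rng.standard_normal((F, H)) * 0.1, rng.standard_normal((H, F)) * 0.1)

out = tsmixer_block(x, time_w, feat_w)
print(out.shape)  # (16, 4): same shape, with cross-time and cross-variate info mixed in
```

Stacking several such blocks is what lets the model use cross-variate information that purely univariate linear models ignore.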
Cool Videos
How AoE2 is helping scientists understand ants
Age of Empires is still the greatest video game ever made (even though the game is older than I am). It was pretty cool to see it being used in a respectable scientific paper. One interesting takeaway: human activity destroys natural landscapes, creating environments that favor smaller invasive species over bigger native species.
In this video we'll look at an academic study which used Age of Empires 2 to simulate battles between ants and explore Lanchester's Square Law.
Do Complex Numbers Exist?
Found a great YouTube channel delivering Math content in a beautiful deadpan delivery and simple language. I'm in love <3.
Do complex numbers exist, or are they just a convenient mathematical tool that we use in science? With the exception of quantum mechanics, it is easy to get rid of complex numbers. But can you do quantum mechanics without complex numbers? A recent paper says no, you can't.
Why Are Billionaires Obsessed With Space? | Economics Explained
With billionaires like Elon Musk and Jeff Bezos setting their sights on space exploration and development, there must be some commercially viable industries in space and on Mars, right? Well, it's a little more complicated than just mining the asteroids that contain trillions of dollars of value, or building heavy things in micro-gravity, and certainly a base on Mars might cost more than it's worth. Making Mars a viable economy might not even be possible.
If you liked this article and wish to share it, please refer to the following guidelines.
I'll catch y'all with more of these next week. In the meantime, if you'd like to find me, here are my social links-
Reach out to me
Use the links below to check out my other content, learn more about tutoring, reach out to me about projects, or just to say hi.
Small Snippets about Tech, AI and Machine Learning over here
AI Newsletter- https://artificialintelligencemadesimple.substack.com/
My grandma’s favorite Tech Newsletter- https://codinginterviewsmadesimple.substack.com/
Check out my other articles on Medium. : https://rb.gy/zn1aiu
My YouTube: https://rb.gy/88iwdd
Reach out to me on LinkedIn. Let’s connect: https://rb.gy/m5ok2y
My Instagram: https://rb.gy/gmvuy9
My Twitter: https://twitter.com/Machine01776819