How to innovate AI at the speed of light, with Lightmatter CEO Nick Harris
For decades, Moore's Law and Dennard scaling drove the advancement of computers, making them faster, smaller, and more energy-efficient. However, as Nicholas Harris explains, this era is coming to an end: transistor-based technology is approaching its physical limits.
The future of computing demands a new solution, and that solution is light. By harnessing light to process and transmit data, Lightmatter is enabling a faster, more efficient, and scalable approach—laying the foundation for the next generation of computing.
On the latest episode of For Starters, I sat down with Nicholas Harris, Co-founder & CEO of Lightmatter, to explore how his company is transforming AI infrastructure and unlocking the future of computing.
Nick founded Lightmatter in 2017 to transform AI data center infrastructure. The company invented the world's first 3D-stacked photonics engine and was most recently valued at $4.4 billion. In this episode, Nick shares why he chose to leave academia after receiving his PhD, how his go-to-market strategy hinged on building real friendships and understanding his customers' internal roadmaps, and why he believes that having an obsession is a gift.
Here are five takeaways:
Go-to-Market:
Nick underscores the importance of staying deeply connected to your target market—in Lightmatter’s case, hyperscalers developing AI compute units and semiconductor companies. Consistently sharing progress, understanding their roadmaps, and listening to customer needs ensures your product aligns with real-world demand. In Nick’s words, “business is about people.”
Academia vs. Entrepreneurship:
Nick’s PhD experience at MIT laid the ideal groundwork for entrepreneurship, teaching him to navigate uncertainty and articulate complex ideas with clarity—skills that proved essential for securing funding and attracting top talent. His eventual decision to leave academia was driven by his desire to maximize impact. Scaling his vision required a team, funding, and infrastructure beyond what academia could provide.
The Future of Supercomputing & AI:
Tech giants like Microsoft, Amazon, Google, and Meta are pouring billions into building data centers so large they consume as much energy as entire cities. As energy demands skyrocket, the focus will turn to connectivity technology—systems capable of efficiently linking millions of GPUs so they can function as one unified supercomputer.
Preparing for the Future:
Nick believes preparation starts with experimentation. Just as early internet adopters unlocked opportunities no one saw coming, companies and individuals need to embrace AI now: test its capabilities, identify strengths and weaknesses, and uncover where it can automate or enhance workflows.
Obsession is a Gift:
Nick credits his success to an early and “irrational” obsession with computers. Nick spent his childhood immersed in books on programming and computer architecture. His takeaway? Not everyone is born with an obsession, and if you are, it’s a gift. When you find that deep curiosity, lean into it.
To date, Lightmatter has raised over $850 million at a $4.4 billion valuation.
RoadSync names John Brown as COO and promotes Lester Rivera to CRO
RoadSync, an innovative digital financial platform for the logistics industry, today announced that it has strengthened its executive leadership team, with John Brown joining as Chief Operating Officer and Lester Rivera promoted to Chief Revenue Officer.
John Brown joins as Chief Operating Officer, bringing 25 years of technology and operational leadership experience. Most recently, Brown served as COO of ParkMobile and previously held executive roles at Clutch Technologies and Cardlytics. As COO, Brown will oversee RoadSync's day-to-day operations, focusing on scaling the company's infrastructure and optimizing operational efficiency to support continued growth.
Lester Rivera has been promoted to Chief Revenue Officer following three years of delivering outstanding sales results at RoadSync as SVP of Sales. Previously, Rivera led inside sales at FLEETCOR for over 12 years. In his role as CRO, Rivera leads RoadSync's revenue strategy and execution, focusing on expanding market reach and accelerating adoption of RoadSync's solutions.
"The logistics industry needs modern financial solutions, and our strengthened leadership team will help deliver them," said Robin Gregg, CEO of RoadSync. "John's expertise in scaling operations and Lester's proven success in reaching complex markets will accelerate our impact across the industry."
These appointments reflect RoadSync's commitment to building a world-class team to support its expansion in the $800 billion logistics and transportation industry. For more information, visit www.roadsync.com.
About RoadSync
RoadSync is the digital financial platform for the logistics industry. By removing paper and phone calls from business transactions, RoadSync offers a fast, convenient, and secure way to move and manage money and conduct business, dramatically reducing payment processing time and maximizing revenue collection. RoadSync offers payment products for the entire supply chain – warehouses, trucks/carriers, repair/tow merchants, and brokers – integrating and automating the financial systems fueling the logistics industry. For more information, visit www.roadsync.com.
The World Economic Forum's "Dawn of Artificial General Intelligence" meeting
You don't need to be in Davos to catch a fascinating #WEF25 discussion about the dawn of Artificial General Intelligence (AGI).
In a panel moderated by Nicholas Thompson, Groq founder and CEO Jonathan Ross and other leading AI experts explored the immense potential of AI alongside critical questions about its development and impact.
AI expert speakers included Andrew Ng, Yoshua Bengio, Yejin Choi, Jonathan Ross, and Thomas Wolf, with Nicholas Thompson moderating. The panel discussed how artificial general intelligence could possess the versatility to reason, learn, and innovate in any task. But with rising concerns about job losses, surveillance, and deepfakes, will AGI be a force for progress or a threat to the very fabric of humanity?
This session was developed in collaboration with The Atlantic.
Centre for the Fourth Industrial Revolution
The Centre for the Fourth Industrial Revolution is advancing the application of human-centered and society-serving technologies.
Octane Lending reports record-breaking $1.6 billion in 2024 originations, up 36%
Company Exceeds $5 Billion in Aggregate Originations and Doubles RV Originations
- Octane originated more than $1.6 billion in 2024, a 36% increase year-over-year, doubled its RV originations, and surpassed $5 billion in aggregate originations
- Closed its Series E funding round, entered the marine market, and launched over 100 new products and enhancements, including Dealer Portal 2.0
- Expanded its relationships with Kawasaki and CFMOTO, and partnered with RideNow to launch RideNow Finance
- Closed four asset-backed securitizations, including its first ABS transaction wholly backed by RV and Marine collateral, and closed the year GAAP net income profitable
- Completed whole-loan sales and announced forward flow commitments of $1 billion
- Won notable awards, opened new headquarters, and made key personnel appointments
Octane® (Octane Lending, Inc.®), the fintech company revolutionizing the financing experience, announced its 2024 milestones on its journey to make buying better. The company had a record-breaking year, exceeding its originations targets, expanding its product offering and markets served, diversifying its capital markets strategy, and strengthening its leadership team.
Significant Momentum in Octane's Powersports and Recreational Vehicle (RV) Business
Octane saw considerable growth in both its powersports and RV business in 2024. The company grew originations through its in-house lender, Roadrunner Financial®, Inc., by 36 percent year-over-year, closing the year at more than $1.6 billion in 2024 originations. Octane also reached several milestones last year, surpassing $4 billion in aggregate originations in February and $5 billion in aggregate originations in September. The company continued to see strong growth in the RV market, doubling its RV originations for the second year in a row. At the same time, it expanded its relationships with key powersports original equipment manufacturers (OEMs), launching full-spectrum financing with both Kawasaki and CFMOTO.
Equity Financing Fuels Product Innovation and Expansion into New Markets
Octane raised $50 million in its Series E funding round, bringing its total equity funding raised since inception to $242 million. This investment fueled the company's entrance into the marine market with full-spectrum credit coverage. During the course of 2024, Octane introduced over 100 new products and product enhancements to make buying better for dealer partners and their customers, including Dealer Portal 2.0, a significantly upgraded version of its industry-leading dealer platform. Octane's soft-pull prequalification tools, Octane Prequal® and Prequal Flex®, continued to help consumers quickly and easily apply for financing, with more than 1,200 dealers using each tool; nearly 400,000 powersports applications were sent to partner dealers through the two digital tools. The company also partnered with RideNow, the largest powersports retailer in North America, to launch RideNow Finance, a private label partnership that drives more sales for RideNow Powersports Dealerships through digital tools, full-spectrum financing, and branded lifecycle marketing.
Strong Financial Performance Combined with Historic ABS Transaction Concludes a Successful Year in Capital Markets
Octane continued to diversify its capital markets strategy in 2024. The company completed four asset-backed securitizations (ABS), which were supported by over 100 investors. The first three transactions, OCTL 2024-1, OCTL 2024-2, and OCTL 2024-3, were backed by powersports collateral. The fourth transaction, OCTL 2024-RVM1, was Octane's first transaction backed wholly by RV and Marine collateral and the first deal of its type in two decades. In addition to completing more than $4 billion of ABS deals to date, other notable 2024 accomplishments include completing a $500 million forward-flow deal with funds managed by AB CarVal, announcing two separate whole loan sales totaling $280 million to Yieldstreet, and completing a $200 million whole loan sale to funds managed by AB CarVal. Octane maintains approximately $1 billion in revolving capacity. Additionally, the company realized strong financial performance in 2024; year-over-year Octane grew top line revenue by 46%, grew profit by 62% (for a gross profit margin of 41%), and increased GAAP net income by approximately 650%, all new record highs*.
Key Leadership Appointments and New Headquarters in Octane's 10th Year in Business
In 2024, Octane made several executive appointments, brought in key talent, and garnered recognition for its performance, culture, and leadership. Octane appointed its first President, Steven Fernald, who continues to serve as the company's Chief Financial Officer in addition to overseeing its Powersports and Outdoor Power Equipment (OPE) business, and announced the appointment of a new Chief Risk Officer, Mark Molnar, an executive with over thirty years of risk experience at both established banks and fintech companies.
Octane also strengthened its Sales and Engineering expertise by acquiring the team behind The Lasso, a platform that links car sellers directly to a network of hundreds of top-notch dealerships. Nate Milhalovich, formerly the CEO and co-founder of The Lasso, joined Octane as SVP of New Verticals, along with four talented engineers and salespeople. Octane will leverage Milhalovich and The Lasso team's technical skills, entrepreneurial spirit, and experience building pipelines and relationships to help Octane strengthen its offering and expand into new markets.
In 2024, Octane celebrated its tenth anniversary, opened a new headquarters in Manhattan, and won nearly 30 awards, including ranking on the Inc. 5000 list of the fastest growing private companies for the third year in a row, making the Technology Fast 500 awarded by Deloitte, LLP for the second year in a row, and being named one of the Best Places to Work in Dallas for the second year in a row. Octane's CEO and Co-founder, Jason Guss, continued to receive accolades for his leadership and vision, being named an Entrepreneur Of The Year® 2024 New York Award Finalist for the second time and receiving the Powersports Finance Achievement Award in Leadership.
"The past year was the most successful in Octane's ten-year history, made possible thanks to the creativity and effort of our team and the ongoing support of our esteemed dealer, OEM, and investor partners," said Guss. "We intend to build on this momentum in 2025, as we reaffirm our commitment to making buying better through a fast, seamless financing experience."
This press release does not constitute an offer to sell or the solicitation of an offer to buy nor shall there be any sale of these securities in any jurisdiction in which such offer, solicitation or sale would be unlawful prior to registration or qualification under the securities laws of such jurisdiction.
*Financial performance based on 2024 unaudited company financial statements.
About Octane:
Octane® is revolutionizing recreational purchases by delivering a seamless, end-to-end digital buying experience. We connect people with their passions by combining cutting-edge technology and innovative risk strategies to make lifestyle purchases (like powersports vehicles, RVs, boats, personal watercraft, and outdoor power equipment) fast, easy, and accessible.
Octane adds value throughout the customer journey: inspiring enthusiasts with the Octane Media™ editorial brands, including Cycle World® and UTV Driver®, instantly prequalifying consumers for financing online, routing customers to dealerships for an easy closing, and supporting customers throughout their loan with superior loan servicing.
Founded in 2014, we have approximately 40 OEM partner brands and 4,000 dealer partners, and a team of over 500 in remote and hybrid roles. Visit www.octane.co.
Octane®, Roadrunner Financial®, Octane Prequal®, and Prequal Flex® are registered service marks of Octane Lending, Inc.
Media Relations: Shannon O'Hara
Press@octane.co
Investor Relations:
IR@octane.co
Narrative revolutionizes AI model training with the launch of its Model Studio and Rosetta Stone 2.0
This is the ‘no-code’ collaborative data solution CROs, product leads, and other non-developers have been waiting for.
While artificial intelligence has democratized a wide range of creative tools for data and analytics, one key area has been left behind: AI model training. Narrative.io has been steadily bridging this gap.
Narrative unveiled a game-changing breakthrough in AI model training. This innovation empowers everyone—from chief revenue officers to product development teams—to train AI models, monetize data, and collaborate effortlessly with clients and partners on AI projects—without the need for complex coding, configuration, or infrastructure.
In simple terms, Narrative’s latest release transforms raw, disparate datasets into valuable training material using an easy point-and-click interface and robust backend technology.
The landscape of AI model development is about to change dramatically with Model Studio and Rosetta Stone 2.0.
These two groundbreaking tools simplify AI model training and data normalization for everyone—from product leads to technical teams.
With Model Studio, you can now train AI models with a simple point-and-click interface, no coding required. Rosetta Stone 2.0 takes data from diverse sources and automatically normalizes it, ensuring consistency without the usual manual effort.
Key benefits include:
- No-code LLM model fine-tuning for easy customization of AI models
- Seamless data integration from multiple sources so you can train on data you find from partners in the Narrative marketplace
- Enhanced data monetization opportunities, allowing you to profit from your data
- Constantly updated to stay ahead of rapid tech advancements, eliminating the need for users to "catch up"
- Multiple model sizes (3B, 8B, 70B parameters) to suit your speed and precision needs
- Train Rosetta Stone to normalize data to your own schema securely, without exposing your data to Narrative
Narrative, the industry's leading data collaboration and commerce platform, today announced a groundbreaking suite of enhancements aimed at democratizing AI development and data utilization. Chief among these is the introduction of a new Large Language Model (LLM) fine-tuning capability, enabling companies of all sizes to customize their own AI models directly on the Narrative platform with unprecedented ease. Additionally, Narrative is thrilled to introduce Rosetta Stone™ 2.0, the next generation of the company's acclaimed data normalization solution, engineered using the very fine-tuning tools now available to customers.
A Collaborative and Accessible Approach to LLM Fine-Tuning
At the core of Narrative's approach is a "collaboration-first" philosophy. Users can now access a broad range of datasets—from proprietary and partner sources to publicly available and marketplace content—to train and refine their LLMs. This approach fosters an ecosystem where content creators and publishers can monetize their work directly, while businesses benefit from a richer, more diverse data pool to power increasingly sophisticated AI models. By removing technical barriers and simplifying the model-building process, Narrative empowers everyone from non-technical operators to seasoned data scientists to craft bespoke language models with a simple point-and-click interface.
Rosetta Stone 2.0: Next-Generation Data Normalization
A centerpiece of today's announcement is Rosetta Stone 2.0, an evolution of Narrative's pioneering data normalization capability. Leveraging Narrative's new LLM fine-tuning platform, the updated Rosetta Stone model delivers remarkable performance gains and expanded functionality. It not only standardizes data automatically across disparate sources, ensuring seamless compatibility and readiness for training, but it also can serve as a foundational base model for customers looking to extend its core normalization capabilities into their specific domain. From ensuring coherent data formats to tackling complex, domain-specific semantic challenges, Rosetta Stone 2.0 is a flexible, next-level tool designed to accelerate data-driven innovation.
Key Features and Benefits:
Easy, No-Code Model Fine-Tuning:
Users can skip the complex coding, configuration files, and intricate infrastructure setups. Narrative's platform translates raw datasets into meaningful training material through an intuitive, point-and-click interface.
Rich Data Ecosystem & Monetization Opportunities:
Through Narrative's marketplace, publishers, content creators, and data owners can directly profit by offering their datasets for model training. Simultaneously, developers can tap into a vast reservoir of high-quality information to train models that align perfectly with their use cases.
Rosetta Stone 2.0 Engineered with Fine-Tuning:
Built using the same LLM customization features now offered to users, Rosetta Stone 2.0 exemplifies the power and potential of the Narrative platform. Its advanced normalization techniques handle complex and heterogeneous data sets, and it can be adapted into a specialized normalization model for industry- or business-specific contexts.
Bring fine-tuning to your data
Narrative fine-tuning is available anywhere Narrative is available, including in Narrative's cloud and in your organization's Snowflake, Databricks, AWS, Azure, or GCP account.
Customizing Rosetta Stone for Your Data
Narrative now gives you the option to tailor Rosetta Stone's powerful data normalization capabilities so it fits your organization's unique data and terminology—no major system overhauls required. This means you get more accurate and consistent results by aligning Rosetta Stone with your own industry language and internal structures.
When you're ready to deploy Rosetta Stone, you can choose from different model sizes to strike the right balance of speed, detail, and cost. Simply pick the option that best fits your team's priorities and infrastructure.
"The launch of our LLM fine-tuning platform and Rosetta Stone 2.0 marks a pivotal milestone in our journey to democratize AI development. With these offerings, anyone can create, refine, and extend powerful language models, and content creators can finally realize tangible value for their contributions. This is what the future of data and AI collaboration looks like—accessible, flexible, and mutually beneficial for all stakeholders." -Nick Jordan, Founder, Narrative
For more information on Narrative's LLM fine-tuning platform and Rosetta Stone 2.0, or to schedule a live demo, visit narrative.io.
EV tractor leader Monarch launches Monarch Tractor Europe N.V.
Official launch of new legal entity positions Monarch Tractor for sales in the European market.
ALSO READ:
Electric Robot Tractor Startup Monarch Eyes Global Expansion With $133 Million Fundraise - Forbes
Monarch Announces Establishment of Monarch Tractor Europe N.V - Monarch Tractor
Building on the successful introduction of its MK-V electric tractor and having established itself as the leading electric, autonomous, and data-driven off-road manufacturer in the specialty and compact tractor market, Monarch Tractor is pleased to announce the establishment of Monarch Tractor Europe, N.V.
Poised for growth across Europe.
"This milestone reflects our long-term engagement to serve the European market with a portfolio of solutions."
Recognized by Forbes in 2023 as one of the top 25 U.S. startups it believes will achieve a $1 billion valuation, Monarch has continued expanding its sales and dealer network across the U.S., into Canada, and now Europe. In addition to late-stage beta testing of fully autonomous operations across several ag markets, Monarch is developing more opportunities for OE licensing collaborations and cementing itself as an energy and data platform.
"With Monarch's ongoing strategic growth journey, the expansion to Europe is a logical next step with planned local manufacturing, an R&D hub, a digital center of excellence, and its EU headquarters in Flanders, Belgium," says Monarch Tractor CEO and co-founder, Praveen Penmetsa.
Monarch Tractor Europe, N.V. is based in Antwerp, Belgium and serves as the business entity for all European operations. The European team is headed by Stéphane Jacobs who, in the role of Managing Director of Europe, will oversee a leadership team responsible for developing and deploying European market entry, growth, and strategy.
"This milestone reflects our long-term engagement to serve the European market with a portfolio of solutions fulfilling the requirements of European farmers," Jacobs says. "Enthusiastic feedback from visitors from our booth at EIMA [Bologna, Italy] and Vinitech-Sifel [Bordeaux, France] confirms that we are on a promising path."
By the end of the 2024 growing season, Monarch's MK-Vs had amassed nearly 67,000 hours of usage and abated more than 2,000 tonnes of CO2e.
About Monarch Tractor
Monarch Tractor's MK-V is the world's first fully electric, driver-optional, smart tractor that combines electrification, automation, machine learning, and data analysis to enhance farmers' existing operations, cut overhead costs, reduce emissions, and increase labor productivity and safety. Monarch Tractor is committed to elevating farming and land management practices to enable clean, efficient, and economically viable solutions for today and the generations to come. Operating on more than five continents, Monarch Tractor serves the vineyard, orchard, blueberry, dairy, and land management sectors. With cutting-edge technology and a global presence, Monarch is delivering meaningful change for the future of farming. For more information on Monarch Tractor, visit www.monarchtractor.com.
Monarch Tractor Media Contact:
Pearl Driver, pdriver@monarchtractor.com
Introducing the Lambda Inference API, AI without the complexity
Lambda Cloud announced the GA release of the Lambda Inference API, the lowest-cost inference anywhere. For just a fraction of a cent, you can access the latest LLMs through a serverless API. Lambda Inference API offers low-cost, scalable AI inference with some of the latest models, such as the recently released Llama 3.3 70B Instruct (FP8), at just $0.20 per million input and output tokens. That’s the lowest-priced serverless AI inference available anywhere at less than half the cost of most competitors.
Choose from "Core" models, which are selected for stability and long-term support, or "Sandbox" models, which provide access to the latest innovations with more frequent updates. The API scales effortlessly to handle workloads of any size and integrates seamlessly with OpenAI-style endpoints, making implementation quick and easy.
AI without the complexity
Inference is where trained models prove their worth. It’s where the AI model takes in new data (aka prompts)—text, images, and embeddings—and generates actionable predictions, insights, or even videos of fire-fighting kittens in near real-time.
From powering conversational agents to generating images, inference is at the heart of every AI-driven application.
But let’s face it: deploying AI at scale is no easy feat. It requires massive amounts of compute, significant expertise in MLOps to set everything up and performance tune it, as well as a hefty budget to keep it all running smoothly. If you’ve ever tried deploying an AI model before, you know.
Lambda built the Lambda Inference API to make inference simple, scalable, and accessible. For over a decade, Lambda has been engineering every layer of its stack (hardware, networking, and software) for AI performance and efficiency. The company has taken everything it has learned since then and built an Inference API, underpinned by an industry-leading inference stack, that's purpose-built for AI.
- ALSO READ: Lambda Launches Grant for AI Researchers, Expands Research Program
GPU cloud company to award cloud credits to hundreds of researchers, powering the most GPU-dependent groundbreaking AI research. Lambda continues company momentum as it aims to double the size of its research team in the next year, bolsters Lambda-led research output, and opens a new San Francisco office.
Original article and insights from VentureBeat
Lambda is a 12-year-old San Francisco company best known for offering graphics processing units (GPUs) on demand as a service to machine learning researchers and AI model builders and trainers.
But today it’s taking its offerings a step further with the launch of the Lambda Inference API (application programming interface), which it claims to be the lowest-cost service of its kind on the market. The API allows enterprises to deploy AI models and applications into production for end users without worrying about procuring or maintaining compute.
The launch complements Lambda’s existing focus on providing GPU clusters for training and fine-tuning machine learning models.
“Our platform is fully verticalized, meaning we can pass dramatic cost savings to end users compared to other providers like OpenAI,” said Robert Brooks, Lambda’s vice president of revenue, in a video call interview with VentureBeat. “Plus, there are no rate limits inhibiting scaling, and you don’t have to talk to a salesperson to get started.”
Developers can head over to Lambda’s new Inference API webpage, generate an API key, and get started in less than five minutes.
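Because the service exposes OpenAI-style endpoints, getting started is mostly a matter of pointing a standard chat-completions request at Lambda's API. The sketch below builds such a request using only Python's standard library; note that the base URL here is a placeholder (consult Lambda's docs for the real endpoint), the key is a dummy value, and the model name is one of those listed in this article.

```python
import json
import urllib.request

# Placeholder endpoint and key -- substitute the real values from Lambda's docs.
BASE_URL = "https://api.lambda.example/v1"  # assumption, not the real endpoint
API_KEY = "YOUR_API_KEY"                    # placeholder

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("llama3.3-70b", "Summarize inference in one sentence.")
```

Sending the request (for example with `urllib.request.urlopen(req)`) returns the familiar OpenAI-style JSON response; equivalently, the `openai` Python client pointed at a custom `base_url` would work without further code changes.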
Lambda’s Inference API supports leading-edge models such as Meta’s Llama 3.3 and 3.1, Nous’s Hermes-3, and Alibaba’s Qwen 2.5, making it one of the most accessible options for the machine learning community. The full list is available here and includes:
- deepseek-coder-v2-lite-instruct
- dracarys2-72b-instruct
- hermes3-405b
- hermes3-405b-fp8-128k
- hermes3-70b
- hermes3-8b
- lfm-40b
- llama3.1-405b-instruct-fp8
- llama3.1-70b-instruct-fp8
- llama3.1-8b-instruct
- llama3.2-3b-instruct
- llama3.1-nemotron-70b-instruct
- llama3.3-70b
Pricing begins at $0.02 per million tokens for smaller models like Llama-3.2-3B-Instruct, and scales up to $0.90 per million tokens for larger, state-of-the-art models such as Llama 3.1-405B-Instruct.
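At pay-as-you-go, per-million-token rates, estimating a request's cost is simple arithmetic: total tokens divided by one million, times the per-million price. A minimal sketch, using the prices quoted above (the mapping of the $0.90 rate to the fp8 variant of the 405B model is an assumption):

```python
# Per-million-token prices (USD) as quoted in the article.
PRICE_PER_MILLION = {
    "llama3.2-3b-instruct": 0.02,        # lowest quoted rate
    "llama3.3-70b": 0.20,
    "llama3.1-405b-instruct-fp8": 0.90,  # highest quoted rate (assumed variant)
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost: combined token count scaled by the per-million rate."""
    rate = PRICE_PER_MILLION[model]
    return (input_tokens + output_tokens) / 1_000_000 * rate

# e.g. a 2,000-token prompt with a 500-token completion on the 70B model
# comes to 2,500 tokens, i.e. $0.0005 -- literally a fraction of a cent.
cost = estimate_cost("llama3.3-70b", 2_000, 500)
```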
As Lambda cofounder and CEO Stephen Balaban put it recently on X, “Stop wasting money and start using Lambda for LLM Inference.” Balaban published a graph showing its per-token cost for serving up AI models through inference compared to rivals in the space.

Furthermore, unlike many other services, Lambda’s pay-as-you-go model ensures customers pay only for the tokens they use, eliminating the need for subscriptions or rate-limited plans.
Closing the AI loop
Lambda has a decade-plus history of supporting AI advancements with its GPU-based infrastructure.
From its hardware solutions to its training and fine-tuning capabilities, the company has built a reputation as a reliable partner for enterprises, research institutions, and startups.
“Understand that Lambda has been deploying GPUs for well over a decade to our user base, and so we’re sitting on literally tens of thousands of Nvidia GPUs, and some of them can be from older life cycles and newer life cycles, allowing us to still get maximum utility out of those AI chips for the wider ML community, at reduced costs as well,” Brooks explained. “With the launch of Lambda Inference, we’re closing the loop on the full-stack AI development lifecycle. The new API formalizes what many engineers had already been doing on Lambda’s platform — using it for inference — but now with a dedicated service that simplifies deployment.”
Brooks noted that its deep reservoir of GPU resources is one of Lambda’s distinguishing features, reiterating that “Lambda has deployed tens of thousands of GPUs over the past decade, allowing us to offer cost-effective solutions and maximum utility for both older and newer AI chips.”
This GPU advantage enables the platform to support scaling to trillions of tokens monthly, providing flexibility for developers and enterprises alike.
Open and flexible
Lambda is positioning itself as a flexible alternative to cloud giants by offering unrestricted access to high-performance inference.
“We want to give the machine learning community unrestricted access to inference APIs. You can plug and play, read the docs, and scale rapidly to trillions of tokens,” Brooks explained.
The API supports a range of open-source and proprietary models, including popular instruction-tuned Llama models.
The company has also hinted at expanding to multimodal applications, including video and image generation, in the near future.
“Initially, we’re focused on text-based LLMs, but soon we’ll expand to multimodal models,” Brooks said.
Serving devs and enterprises with privacy and security
The Lambda Inference API targets a wide range of users, from startups to large enterprises, in media, entertainment, and software development.
These industries are increasingly adopting AI to power applications like text summarization, code generation, and generative content creation.
“There’s no retention or sharing of user data on our platform. We act as a conduit for serving data to end users, ensuring privacy,” Brooks emphasized, reinforcing Lambda’s commitment to security and user control.
As AI adoption continues to rise, Lambda’s new service is poised to attract attention from businesses seeking cost-effective solutions for deploying and maintaining AI models. By eliminating common barriers such as rate limits and high operating costs, Lambda hopes to empower more organizations to harness AI’s potential.
Introducing Datalogz BI Ops, a new approach to governance for the Business Intelligence consumption layer
Business intelligence (BI) has revolutionized the way organizations interact with their data, making advanced analytics accessible to non-technical users and empowering decision-makers with actionable insights. With tools like Microsoft Power BI, Tableau, Looker, and Qlik, BI has become the critical interface between enterprise data and business users. However, as BI adoption accelerates, a critical gap has emerged: governance at the consumption layer.
“BI Ops ensures that business intelligence remains a trusted, secure, and efficient driver of decision-making.”
BI sits at the intersection of data and decision-making. It’s where insights are generated, dashboards are built, and reports are delivered. Yet the rapid proliferation of BI tools and reports—coupled with growing data volumes—has introduced new challenges. Organizations are grappling with fragmented standards, duplicate datasets, conflicting reports, and potential security risks.
According to McKinsey, BI and reporting now account for 5-10% of total IT spend. Despite investments in data governance at the warehouse level, many organizations are finding that these efforts do not extend to the consumption layer where BI operates. This gap has created inefficiencies, eroded trust in data, and exposed organizations to security vulnerabilities.
Introducing BI Ops: Governance at the BI Layer
Datalogz is proud to introduce BI Ops, a new approach to governance designed specifically for the consumption layer. By focusing on BI reporting, datasets, user behavior, platform administration, and resource consumption, BI Ops bridges the gap between data governance and BI operations.
“BI is where the majority of employees interact with data directly. Without proper governance, the benefits of democratized data can quickly become liabilities,” said Logan Havern, CEO at Datalogz. “BI Ops ensures that business intelligence remains a trusted, secure, and efficient driver of decision-making.”
The Three Pillars of BI Ops Governance
- Source of Truth: BI Ops ensures data is accessible through reliable, verified sources, reducing redundancy and improving clarity.
- Trust: By promoting consistency and accuracy, BI Ops builds confidence in BI reports and dashboards.
- Security: BI Ops provides robust oversight of data access and sharing, safeguarding sensitive information.
Datalogz Control Tower: Enabling BI Ops
At the core of Datalogz’s BI Ops solution is the Datalogz Control Tower, a platform designed to bring visibility, monitoring, and security to the consumption layer.
Key features include:
- Unified Metadata Extraction: Provides visibility into BI environments across multiple platforms.
- Monitoring and Alerts: Tracks asset creation, engagement, and failures while identifying discrepancies, redundancies, and unverified datasets.
- Security Oversight: Monitors user behavior, data exports, and administrative changes to mitigate risks.
By addressing these areas, the Datalogz Control Tower empowers organizations to optimize BI usage, eliminate inefficiencies, and enhance security across their analytics stack.
The Future of BI Governance: BI Ops
The rise of BI underscores the need for governance that extends beyond traditional data warehouses. BI Ops is that solution, ensuring that the consumption layer operates as a well-governed, efficient, and secure extension of an organization’s data strategy.
To learn more about BI Ops and the Datalogz Control Tower, schedule a demo at https://www.datalogz.io/book
About Datalogz
Datalogz is a fast-growing, culture-focused, venture-backed startup dedicated to building products that re-imagine an organization's Business Intelligence environments. Datalogz is creating the future of BI Ops and is on a mission to end BI and analytics sprawl. The team comprises elite data technology entrepreneurs and analytics leaders and is always looking to bring on talent that aligns with its vision, mission, and values.
Contacts
Tina Bhatia
Datalogz
+1-315-216-2203
Tina@datalogz.io
Argentine fintech Ualá raises $300M in Series E funding at $2.75B valuation
Ualá Eyes Banking Expansion After $2.75 Billion Valuation
Argentine fintech company Ualá has raised $300 million from investors at a valuation of $2.75 billion, the latest funding haul for one of the most valuable startups in Latin America. The Series E round was led by Allianz X, the venture capital arm of insurance giant Allianz. Stone Ridge Holdings Group, Tencent, Pershing Square Foundation, Ribbit Capital, Goldman Sachs Asset Management, Soros Fund Management, Rodina, SoftBank Latin America Fund, Jefferies, D1 Capital Partners, Claure Group, AlleyCorp and Monashees all joined the round.
The deal represents the largest growth equity round in Latin America in recent years and marks Allianz X’s first investment in the region. Ualá plans to use the capital to enhance its services across Latin America, particularly in Argentina, Mexico, and Colombia.
“We’re going to use this to scale Argentina, where my goal is to be the largest bank in the country, not just by users but by book,” founder and CEO Pierpaolo Barbieri said in an interview with Bloomberg News posted Monday (Nov. 11).
He added that the company has not ruled out growth in other markets or acquisitions. The funding will also help with “the expansion of business units in both Mexico and in Colombia,” said Barbieri.
Ualá reached a valuation of $2.45 billion in 2021 following a $350 million injection of private funding. The chief executive told Bloomberg the new valuation “shows confidence” in Ualá’s potential, adding, “We’re very proud to be able to show leadership in the region.”
The company will try to reach profitability in all markets before considering an initial public offering in the U.S., he added.
The funding round comes amid a challenging environment for Latin American startups, with investors staying on the sidelines as interest rates in the US remain high relative to recent years. The region saw limited VC dealmaking in the third quarter and relatively few recent acquisitions or new public offerings, according to data from PitchBook.
Regional Reach
Ualá, the largest startup in Argentina, has 8 million users. About 6 million of those are located in its home country, a user base that represents more than 17% of Argentina’s adult population. Launched in 2017 with a debit card, the company now offers a series of products including payments, credit, merchant acquiring and investments. Most recently, demand has been “crazy” for dollar-denominated accounts, Barbieri said, with 100,000 account openings in the first five days of the offering, even though only half a million users were given the option.
Argentina’s President Javier Milei attended the announcement at the company’s newly inaugurated headquarters in the Palermo neighborhood of Buenos Aires. Of Ualá’s 1,500 employees, about 1,000 are located in Argentina.
The company is looking into Milei’s investment incentive plan, known as RIGI, and Barbieri said he expects the company will qualify for the incentives, but declined to give details.
Barbieri also highlighted the potential of Mexico’s market, where the company has operated with a banking license since receiving regulatory approval in 2023, and where it expects to eventually do more business than it does in Argentina. Barbieri said that Ualá would continue to provide a high yield on savings accounts, a tool that has become a popular way for fintechs to lure savers in Mexico.
“What has been difficult is to underwrite credit in Mexico to people that don’t have a credit history,” he said. “In our case, we’re debit first and we use that data, plus cellphone data, bill-payment data, remittance data and merchant acquiring data to build our own score and underwrite credit.”
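The alternative-data approach Barbieri describes, combining debit, cellphone, bill-payment, remittance, and merchant data into an in-house score, can be sketched as a simple weighted sum. The feature names, weights, and values below are purely hypothetical illustrations, not Ualá's actual model.

```python
# Purely hypothetical sketch of scoring from alternative data signals;
# the features and weights are illustrative, not Ualá's model.
ASSUMED_WEIGHTS = {
    "debit_activity": 0.4,      # spending history from debit-first usage
    "bill_payments": 0.25,      # on-time bill-payment behavior
    "remittances": 0.15,        # regularity of inbound remittances
    "merchant_acquiring": 0.2,  # sales volume for merchant users
}

def alt_data_score(signals: dict) -> float:
    """Combine normalized signals (each in [0, 1]) into a single score."""
    return sum(ASSUMED_WEIGHTS[k] * signals.get(k, 0.0) for k in ASSUMED_WEIGHTS)

applicant = {"debit_activity": 0.8, "bill_payments": 0.9, "remittances": 0.5}
score = alt_data_score(applicant)  # 0.4*0.8 + 0.25*0.9 + 0.15*0.5 = 0.62
```

In practice such underwriting models are far richer (and typically learned rather than hand-weighted), but the sketch shows the core idea: substituting observed transaction behavior for a missing credit history.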
The company will also look to expand its offerings in Colombia, where it has about 700,000 users, but is still adding services. “The rate of growth there has been lower because the product suite is not yet complete,” Barbieri said.
The partnership with Allianz also will open space for Ualá to unlock opportunities in insurtech, according to a press release.
“Our investment in Ualá offers both financial and strategic partnership opportunities, which can enable us jointly to penetrate new customer segments, capitalize on growth opportunities, and enhance customer experience,” Allianz X Chief Executive Officer Nazim Cetin said.
(Corrects second paragraph to reflect investment from Pershing Square Foundation, not Pershing Square Holdings.)
Lambda Announces Strategic Partnership with SK Telecom to Expand Cloud Services in South Korea
GPU cloud company expands international presence, furthering mission to build the world’s largest on-demand GPU cloud for AI
Lambda, the AI developer cloud, today announced a partnership with South Korea's largest mobile telecommunications company, SK Telecom (SKT). Amid increasing demand for AI infrastructure in South Korea, the partnership with SK Telecom represents a key step in Lambda’s vision to bring its AI Cloud to engineers across the world.
“Given the rapid pace of AI innovation happening in South Korea, we’re excited to partner with SKT in their mission to transform their company and country into a global AI powerhouse.”
“SKT shares in our vision to make GPU compute as ubiquitous as electricity,” said Lambda CEO and co-founder, Stephen Balaban. “Given the rapid pace of AI innovation happening in South Korea, we’re excited to partner with SKT in their mission to transform their company and country into a global AI powerhouse.”
As part of the agreement, Lambda and SK Telecom will deploy Lambda’s AI Cloud platform in SK Broadband’s Gasan data center later this year, enabling SK Telecom to introduce a South Korea-based GPU cloud service and bring local AI cloud services to enterprises, startups, and research labs in the country.
“Through our strategic partnership with Lambda, we are able to bolster SK Telecom’s leadership in AI services and capabilities while unlocking new business opportunities,” said Kim Kyeong-deog, Vice President and Head of Enterprise Business Division at SK Telecom.
About Lambda
Lambda was founded in 2012 by AI engineers with published research at the top machine learning conferences in the world. Our GPU cloud and on-prem hardware enable AI engineers to easily, securely, and affordably build, test, and deploy AI products at scale. Lambda’s mission is to accelerate human progress with ubiquitous and affordable access to computation. One person, one GPU.
Contacts
Brittany Catucci
pr@lambdal.com