Tag: startup funding india

  • NowPurchase Funding Lands ₹80 Cr for Scrap, AI

    NowPurchase Funding Lands ₹80 Cr for Scrap, AI

    NowPurchase is a Kolkata-based manufacturing materials marketplace that helps metal factories source scrap, alloys, and additives more efficiently. For a lot of foundries, raw-material buying is still opaque, fragmented, and painfully manual. That mess shows up on the shopfloor fast. The new NowPurchase funding round brings in ₹80 Cr, led by Bajaj Finserv, to push deeper into scrap recycling, branded products, and its AI-led SaaS platform MetalCloud. Founded in 2017 by Naman Shah and Aakash Shah, the company is trying to own both sides of the workflow: what factories buy and how they melt it.

    What is NowPurchase and how does MetalCloud work?

    At the front end, NowPurchase works like a specialized procurement layer for metal manufacturers. Buyers can source raw materials such as metal scrap, alloys, and additives through the platform. The experience is tied to a WhatsApp bot that handles real-time price and stock discovery. That matters because most factories in this category still don’t want another bloated enterprise system. They want quick visibility, faster quotes, and someone who can actually support the order on the ground.

    MetalCloud is the software layer sitting inside production. Its core job is to help foundries decide the right charge mix based on inventory, market prices, and available supply, then turn that plan into something usable on the factory floor. The platform captures data through kiosks, IoT hooks, and software inputs. It pushes live production information to computers and WhatsApp, including heat data, sample chemistry, raw-material consumption, breakdown logs, and power use. It also generates a digital melting log sheet and dashboard views for furnace utilization, idle time, liquid metal tap, and specific power consumption.

    One of MetalCloud’s more practical modules is the Suggest engine. It gives addition and dilution recommendations during melting and sends spectrometer readings to WhatsApp. It’s designed to reduce the tiny chemistry errors that quietly wreck margins in a foundry. NowPurchase says this module can reach up to 98% accuracy in those recommendations. That’s the kind of detail operators actually care about.
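    NowPurchase hasn’t published the math behind the Suggest engine, but addition recommendations of this kind usually reduce to a standard melt mass balance. A minimal, illustrative sketch (the function name and the 75% ferrosilicon figure are assumptions for the example, not details from NowPurchase):

    ```python
    def addition_kg(melt_kg, current_pct, target_pct, alloy_pct):
        """Ferroalloy mass needed to raise one element to a target level.

        Illustrative only -- solves the mass balance
        (melt_kg * current_pct + x * alloy_pct) / (melt_kg + x) = target_pct
        for x. Real charge-mix engines juggle many elements, prices,
        and available inventory at once.
        """
        if alloy_pct <= target_pct:
            raise ValueError("alloy must be richer in the element than the target")
        return melt_kg * (target_pct - current_pct) / (alloy_pct - target_pct)

    # 1,000 kg melt at 1.8% Si, target 2.2% Si, using 75% ferrosilicon:
    print(round(addition_kg(1000, 1.8, 2.2, 75.0), 1))  # ≈ 5.5 kg
    ```

    Dilution works the same way in reverse: an over-target element gets pulled down by adding charge material that is leaner in that element than the target.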

    The product is getting broader, too. A newer Defect Bot AI can analyze casting defect images in about 30 seconds. It returns confidence-scored diagnoses, ranks likely root causes, and suggests corrective actions. Put simply, the software is moving from procurement support into a fuller operating stack for foundries: melt planning, shopfloor monitoring, and quality control.

    Who founded NowPurchase and what has it built?

    How the company started

    NowPurchase didn’t begin as a narrow foundry-tech company. Naman Shah started it after seeing how little industrial buying had changed compared with consumer commerce, and the early thesis was broader B2B procurement. His cousin Aakash Shah joined as co-founder, and the two validated the idea by visiting factories across Delhi-NCR, Kolkata, and Mumbai before building the company out.

    The sharper version of the business came later. By December 2019, NowPurchase had pivoted toward the metal manufacturing market after the founders decided a horizontal model wouldn’t make them important enough to customers. That move gave the company a clearer customer and a more urgent workflow. It also gave it a better shot at becoming part of the supply chain rather than just another seller.

    Why Naman Shah and Aakash Shah fit this market

    Naman brought startup experience from the US and Singapore, including a stint leading BizEquity’s expansion in Asia. That doesn’t make someone a metallurgist overnight, of course. But it does explain why the company has always leaned hard into software, process design, and category specialization rather than just brokering materials.

    Aakash’s fit is more operational. He came in with exposure to mechanical engineering, rural marketing, and B2B consultative selling, and the family’s manufacturing business gave both founders a close-up view of how procurement headaches pile up inside factories. That mix matters. Software ambition on one side, industrial reality on the other.

    What NowPurchase has executed so far

    This isn’t a beta-stage story. NowPurchase has delivered more than 1.95 lakh tonnes of raw materials to over 200 clients, and it currently operates 6 warehouses and 2 scrap processing centers. After its 2024 raise, the company expanded in Maharashtra with a scrap recycling unit near Pune. It also started micro-centers in Punjab, Gujarat, and Tamil Nadu. By September 2024, Naman Shah said MetalCloud was already being used by more than 100 factories across India.

    Its tech bench also got stronger when Ankan Adhikari joined as CTO in January 2021. He had previously founded Pyoopil Education and sold it to upGrad in 2016. That helps explain why NowPurchase’s software side looks more deliberate than what you usually see from industrial marketplaces.

    The ₹80 Cr round and how NowPurchase stacks up

    Here’s the deal. The latest NowPurchase funding round is ₹80 Cr, or about $8.5 Mn, led by Bajaj Finserv. Existing backers InfoEdge Ventures, Orios Venture Partners, and Real Ispat Group also joined, along with investors and family offices including S Four Capital partner Shikhar Raj and Lloyds Group promoter-director Madhur Gupta. The company has now secured ₹120 Cr in equity overall, and this comes after a $6 Mn mix of equity and debt in 2024 led by InfoEdge Ventures.

    Competition is real, but it’s split. Metalbook is a digital metal marketplace with financing and logistics built around supply-chain transactions. ScrapEco focuses on digital scrap buying and selling. Then there are the old-school alternatives: local traders, brokers, dozens of phone calls, manual quote comparisons, and plant teams running procurement from WhatsApp threads and spreadsheets.

    NowPurchase isn’t just a commerce layer, and it isn’t just software. It combines physical infrastructure and on-ground service. It also has scrap processing, branded materials, and a foundry-focused SaaS stack that reaches into melt execution and defect analysis. That hybrid model is harder to scale. It’s also probably why investors are still writing checks.

    Why does NowPurchase funding matter now?

    Because this round changes the company’s shape more than its headline.

    The money is earmarked for 3 concrete things: strengthening scrap recycling, expanding the branded products portfolio, and scaling MetalCloud. That means NowPurchase isn’t using fresh capital just to sell more material. It’s trying to deepen control over supply quality, margin structure, and customer stickiness at the same time.

    There’s also a geographic signal in the plan. Naman Shah has called Tamil Nadu “the most promising market” for the company right now, and NowPurchase plans to add another scrap processing center there while also setting up end-to-end marketplace operations. It’s also looking to open 2 more facilities in Jharkhand and Tamil Nadu in the next 3 to 6 months. If that build-out lands on schedule, the company gets closer to being a regional operating network rather than a single procurement brand.

    Bajaj Finserv leading the round matters, too. Not because a finance brand automatically makes a startup better. But because it suggests institutional belief in a category that sits awkwardly between industrial commerce, recycling infrastructure, and vertical SaaS.

    How big is the market NowPurchase is chasing?

    Big enough to attract a lot of attention, and messy enough that specialists still have room.

    One way to look at it is through the foundry side. Mordor Intelligence pegs India’s foundry market at $28.72 billion in 2026 and projects it to reach $46.72 billion by 2031. On the upstream side, India produced 54.19 MT of crude steel in FY26 between April and July 2025, and the country is expected to reach about 330 MT of steelmaking capacity by 2030. That’s a huge industrial base.

    The scrap story is just as important. EY says India consumed about 34.2 million tons of ferrous scrap in 2024, while scrap utilization in crude steel production was only around 23%, below global norms. It also notes that Maharashtra, Punjab, and Tamil Nadu accounted for roughly 35% of India’s scrap consumption in scrap-based steelmaking that year. So the timing makes sense: more scrap demand, more pressure on efficiency, and plenty of room to formalize collection, processing, and plant-level decision-making.

    What should you watch after this NowPurchase funding round?

    What matters next is whether the company can turn this ₹80 Cr into tighter recycling capacity, a stronger branded-materials business, and a MetalCloud product that becomes part of daily plant operations instead of a nice-to-have dashboard. Watch Tamil Nadu. Watch the new centers in Jharkhand and Tamil Nadu. Watch whether NowPurchase can keep proving that a metal marketplace can also become factory software.

    Read how Xovian raised $2M to advance RF satellite systems for reliable and scalable space networks.

    FAQ

    What is the latest NowPurchase funding round? 

    NowPurchase has raised ₹80 Cr, or about $8.5 Mn, in a round led by Bajaj Finserv. Existing investors including InfoEdge Ventures, Orios Venture Partners, and Real Ispat Group also participated, along with names such as Shikhar Raj and Madhur Gupta.

    How does MetalCloud work for foundries?

    MetalCloud is an AI-led factory software stack for metal manufacturers. It helps teams choose charge mixes and captures production and heat data through kiosks and software inputs. It sends updates over WhatsApp and now extends into defect diagnosis and melt execution support.

    Who founded NowPurchase? 

    NowPurchase was founded in 2017 by Naman Shah and Aakash Shah. Naman had earlier startup experience in the US and Singapore, including BizEquity’s Asia expansion, while Aakash brought operating exposure in mechanical engineering, rural marketing, and B2B selling.

    Is NowPurchase a SaaS company or a metal marketplace? 

    It’s both. NowPurchase sells and processes industrial raw materials through its marketplace and physical network, while MetalCloud handles production intelligence inside the plant, which puts the company in the overlap between vertical SaaS, industrial procurement, and scrap recycling.

  • Xovian Raises $2M to Build RF Satellites

    Xovian Raises $2M to Build RF Satellites

    Xovian is a Bengaluru startup building RF satellites that turn radio signals from space into real-time intelligence. It has now raised $2 million in fresh funding led by Ashish Kacholia, with existing investor Inflection Point Ventures joining in. The pitch is simple: when ships or aircraft go dark, optical satellite imagery often can’t keep up, and that blind spot matters for defence, logistics, aviation, and maritime operations. Founded in 2019 by Ankit Bhateja and Raghav Sharma, the company will put the new capital into satellite development, deeper AI and engineering hires, and commercial partnerships.

    What does Xovian’s RF satellite platform do?

    Xovian’s product is a full-stack RF intelligence system. Its satellites are built to capture radio frequency emissions across the spectrum. Its AI layer interprets those signals. The output becomes a decision layer that mixes signal intelligence with geospatial context for customers tracking assets or monitoring activity across land, sea, and air.

    That’s the core idea.

    The workflow is more specific than the usual “AI plus space” line. Xovian is validating a multi-frequency RF payload first, then moving toward a nanosatellite deployment. Once in orbit, the system is designed to continuously scan Earth’s radio spectrum. It detects shifts in intent, exposure, or volatility, then pushes low-latency insights from spacecraft to cloud software. Customers don’t have to stitch together separate hardware, software, and analytics vendors on their own.

    That’s the operational difference. Bhateja said older decision chains can take 4 to 4.5 hours because teams are waiting on imagery, cross-checking signals, and manually interpreting movement. Xovian’s pitch is that RF-first monitoring can shrink that to under 10 minutes. For a customer watching a vessel, aircraft, or sensitive corridor, that’s the whole product.

    It also removes some of the clunky parts of legacy intelligence work. Instead of relying only on what a camera can see, Xovian’s architecture is built to listen for activity and classify it. It then delivers contextual alerts for sectors ranging from maritime and aviation to defence and climate monitoring. Vertical integration — hardware, payload, sensing, AI, and delivery in one stack — keeps the latency low enough to matter.

    Who built this RF satellite startup and why?

    The founding story

    Xovian was founded in 2019 by Ankit Bhateja and Raghav Sharma, though the company’s incorporation dates back to October 25, 2018. It was built around a sharp thesis: optical satellites miss too much of the world’s live activity, so intelligence systems need to understand radio behavior in real time, not just images after the fact. That’s the gap the founders chose to chase.

    Why the founders have real market fit

    Bhateja didn’t arrive at this through a generic software route. Before Xovian’s current satellite push, he was already speaking publicly about an indigenously developed passive-radar approach for maritime and environmental monitoring, and he said he had support from ISRO on earlier projects. Sharma brings a different angle. He’s a chemical engineering graduate from NIT Jalandhar who, after a stint at Escorts, moved into building Xovian with a focus on satellite manufacturing and services.

    Early execution and technical signals

    The company’s earlier work wasn’t limited to slide decks. Sharma’s SGAC speaker profile ties Xovian to amateur rocket testing and CanSat programs. It also links the company to a PES University collaboration around satellite development and payloads for drought, glacier, and biomass monitoring. That doesn’t make the current RF satellite program de-risked. Space hardware never is. But it does show the founders have been building in this domain for years, not just since deeptech became fashionable.

    Product status and traction

    Right now, Xovian is still in the build-and-validate phase. That’s exactly where you’d expect an early hardtech company to be. It’s preparing its first AI-native RF satellite, with payload validation planned on an ISRO launch vehicle and early customer pilots and data trials targeted for 2026. The company lists itself in Bengaluru with an employee band of 11-50. It’s still a compact, engineering-heavy team.

    Fundraising details

    This new round brings in $2 million, led by Ashish Kacholia, with Inflection Point Ventures participating again, and takes Xovian’s disclosed funding to $4.5 million. Before this, the startup raised $2.5 million in August 2025 from Piper Serica, Turbostart, IPV, and Eaglewings Ventures. The fresh capital is earmarked for satellite development. It’ll also go toward stronger engineering and AI teams, plus commercial tie-ups that can turn payload capability into paying use cases.

    How Xovian compares with rivals

    Xovian doesn’t sit neatly beside India’s better-known spacetech names. Pixxel is identified far more with Earth imaging and hyperspectral data. SatSure is a downstream decision-intelligence and Earth observation player. Dhruva Space is stronger on satellite platforms and mission infrastructure, while Bellatrix is about propulsion. Xovian’s wedge is narrower and more specialised: RF sensing for real-time situational awareness when optical methods are too slow, too limited, or simply blind.

    Legacy competition matters too. In practice, Xovian isn’t only competing with startups. It’s up against a patchwork of optical satellite feeds and ground-based monitoring. It also has to beat manual analyst workflows and delayed intelligence handoffs. Investors backing the company are betting that an RF-first, vertically integrated stack can produce faster, more usable intelligence than those fragmented systems.

    Why does Xovian’s new funding round matter?

    Because this isn’t a consumer app where more money just means more marketing.

    For Xovian, the new round matters because it helps bridge the hardest gap in any space startup: moving from a technical concept to hardware in orbit. That means payload development and satellite integration. It also means AI model refinement, plus the kind of engineering hiring that can’t be faked with flashy branding. If the company misses on execution, the thesis collapses fast. If it hits, it owns a much harder-to-copy layer of space intelligence.

    It matters for customers too. Maritime operators, aviation users, logistics networks, and defence-linked buyers don’t need another dashboard. They need better visibility when assets are moving, disappearing, or behaving strangely. Xovian’s use of funds suggests it’s trying to get from “interesting RF tech” to “commercially usable monitoring system.” That’s a much more serious milestone.

    The round also says something about investor appetite. Kacholia leading the round, with IPV participating again, signals belief in a deeptech model where defensibility comes from proprietary hardware plus intelligence software, not just one or the other. It’s a tougher build. It’s also why a company like this can still stand out in a crowded Indian startup market.

    Why are RF satellites and Indian spacetech attracting capital now?

    The market backdrop is doing some heavy lifting here. One widely cited estimate puts India’s space sector on a path from about $13 billion to $77 billion by 2030. A separate FICCI-EY projection, cited in 2025, pegs India’s space economy at $44 billion by 2033, up from roughly $8.4 billion in 2024, with the country targeting an 8% share of the global market. Those numbers aren’t identical, but they point the same way: investors see a much bigger commercial space market forming in India than existed a few years ago.

    Policy has changed the timing. India opened the sector to private participation in June 2020, followed that with Indian Space Policy 2023, and loosened foreign investment rules in February 2024. Business Standard also noted roughly 250 startups are now operating across upstream and downstream space segments. That helps explain why new categories, including RF intelligence, are finally getting funded instead of being treated like science projects.

    There’s also a practical demand story here. The same FICCI-EY outlook sees Earth observation and remote sensing contributing about $8 billion by 2033, while satellite communication is projected to become the largest slice of the market at $14.8 billion. That matters because Xovian sits in the part of the stack where sensing, intelligence, defence relevance, and commercial monitoring start to overlap.

    Recent deal flow backs that up. Bellatrix Aerospace raised $20 million in late March 2026 to expand satellite propulsion manufacturing, and Dhruva Space was back in the market for a much larger round in February 2026. Investor interest in Indian spacetech is real. But the bar is rising too. Startups now need clear technical moats, not just patriotic pitch decks.

    Xovian’s bet is that RF satellites could become one of those moats.

    If the company can get its first satellite and customer pilots working on schedule, it won’t just be another Indian spacetech fundraising story. It’ll be a test of whether RF intelligence can become a durable commercial category.

    Read how Sycamore raised $65M to create an agent operating system for managing AI agents inside enterprise workflows.

    FAQ

    What funding did Xovian just raise?

    Xovian raised $2 million in a fresh round led by Ashish Kacholia, with existing backer Inflection Point Ventures also participating. The round takes the Bengaluru startup’s disclosed funding to $4.5 million and is meant to push satellite development, AI hiring, and commercial partnerships further.

    How do Xovian’s RF satellites work? 

    Xovian’s RF satellites are designed to detect and interpret radio frequency signals rather than relying only on optical imagery. The company combines space-based sensing and AI-led signal analysis. It also has a cloud delivery layer so customers can get real-time monitoring and situational awareness from RF activity across land, sea, and air.

    Who are the founders of Xovian?

    Xovian was founded by Ankit Bhateja and Raghav Sharma in 2019. Bhateja had already been working on passive-radar ideas tied to maritime and environmental use cases, while Sharma came from an engineering background at NIT Jalandhar and earlier industry experience before co-building the company.

    Is Xovian a spacetech company or a defence-tech company? 

    It’s best described as a spacetech company with strong defence-tech and intelligence applications. Its product sits at the intersection of satellite infrastructure, signal intelligence, geospatial analytics, and asset monitoring. That’s why it can sell into maritime, aviation, logistics, and defence-type use cases at the same time.

  • Sycamore Raises $65M for an Agent Operating System

    Sycamore Raises $65M for an Agent Operating System

    Sycamore builds an agent operating system that lets large companies create, govern, and run autonomous AI agents inside real enterprise workflows. On March 30, 2026, the Palo Alto startup closed a $65 million seed round led by Coatue and Lightspeed — a huge first round for a company launched by founder and CEO Sri Viswanath in late 2025 after leaving his full-time investing role at Coatue. The pitch is simple enough to understand and hard enough to execute: enterprises want AI agents to do real work, but most companies still don’t have a safe way to control those systems once they start touching production apps, data, and infrastructure.

    That’s why this deal stands out.

    A lot of AI startups are shipping wrappers, copilots, or narrow workflow bots. Sycamore is trying to own the layer underneath them — the control system that decides how agents are built and what they’re allowed to do. It also governs how they improve and who can audit the result. That’s a much bigger bet. Investors usually fund this kind of company early only when they think the founder has already seen a platform shift up close.

    What is Sycamore’s agent operating system and how does it work?

    Sycamore’s agent operating system is a full-lifecycle platform for enterprise AI: companies can discover use cases, build agents, deploy them, observe what they do, and evolve those systems over time. Users describe what they want in natural language, and Sycamore generates production-ready agents, applications, and integrations tailored to that company’s environment, rather than forcing teams to stitch together a pile of separate tools.

    The most interesting part is the trust model. Sycamore says agents don’t just get full autonomy on day 1 — they move “from observation to action” as they prove reliability. Every operation is isolated and auditable. Governance is built in from the start, with roles, permissions, control planes, and traceability. For enterprise buyers, that matters a lot more than a flashy demo.

    The platform is also pitched as more than orchestration. Sycamore describes 4 core building blocks: a progressive trust system, adaptive system generation, continuous improvement, and collective intelligence. In plain English, that means the software is supposed to connect company data and workflows. It learns from outcomes and preserves institutional knowledge across deployments instead of treating each agent like a disposable one-off.

    That’s the before-and-after story here. Before, an enterprise team has to wire together models, permissions, logs, integrations, and human review by hand. After, Sycamore wants a single agent operating system to handle that work. It generates the system, watches it, and keeps tightening the loop as it learns.

    Who founded Sycamore and why are investors backing it?

    The founding story

    Sycamore was founded by Sri Viswanath, who launched the company after leaving his full-time role at Coatue in the fall of 2025. His argument is that AI agents are the next platform shift in enterprise computing: models can now reason and act, but companies still lack the infrastructure to deploy that autonomy safely. That’s basically the whole company thesis.

    Why Sri Viswanath fits this category

    This isn’t a first-time founder guessing his way through enterprise plumbing. Viswanath has spent more than 20 years building enterprise platforms, with stops at Sun Microsystems and VMware, then CTO roles at Groupon and Atlassian. At Atlassian, he led the company’s cloud transformation and said he scaled the engineering organization to more than 7,000 people. That’s exactly the kind of operating experience investors like to see when the product is all about control, reliability, and scale.

    There’s another reason the round came together fast. Viswanath told TechCrunch that “the round came together through long-standing relationships,” which makes sense given the cap table. Before starting Sycamore, he was a general partner at Coatue focused on AI and enterprise. He’d already spent years around the buyers, builders, and backers now crowding into this category.

    Early signals and the seed round

    Sycamore hasn’t named customers, but Viswanath said the company already has traction with large enterprise buyers. The team works directly with Fortune 100 companies, and the company describes a founding group that includes researchers from Stanford and Cornell plus engineers from Meta, Google, and Atlassian. That’s not the same thing as published revenue. It is a real signal that the startup is selling into serious accounts early.

    The round itself is stacked. Coatue and Lightspeed led the $65 million seed. Additional participation came from Abstract Ventures, Dell Technologies Capital, 8VC, Fellows Fund, and E14 Fund. The angel list is unusually heavyweight too: Bob McGrew, Lip-Bu Tan, Ali Ghodsi, Frederic Kerrest, Soham Majumdar, Mike Knoop, BJ Jenkins, Francois Chollet, Jerry Tworek, Jay Simons, and others all show up around the deal.

    How does Sycamore compare with other agent operating systems?

    Sycamore isn’t walking into an empty category. The source deal report names smaller startups like Maisa AI, bigger newly funded entrants like OpenAI-backed Isara with a reported $94 million raise, and growth-stage players like Airia and Port, which each announced $100 million rounds in late 2025. Then you’ve got platform giants trying to own the same control point — OpenAI with Frontier and Anthropic with Cowork. Microsoft Azure has Foundry, and AWS has Amazon Bedrock AgentCore.

    But the real competition isn’t only other startups. It’s also the messy status quo inside big companies: internal platform teams bolting together model APIs, access controls, observability tools, workflow software, and homegrown security review. That approach can work for a pilot. It gets painful fast when agents start crossing business functions or making decisions with real consequences.

    Sycamore’s differentiation is that it’s trying to sell the whole system, not a narrow add-on. Viswanath told TechCrunch most tools “layer agents on top” of existing workflows, while Sycamore starts with the problem and builds the right mix of agents and back-end systems. It also builds front ends and integrations from scratch. Pair that with the company’s progressive trust model and governance-heavy design, and you can see the investor bet: if enterprises really do move from assistants to autonomous operators, the control plane may be worth more than any single agent app.

    Why does this $65M seed round matter for Sycamore?

    Because $65 million is a giant seed for a company that’s still early. It changes what Sycamore can attempt.

    A smaller round would’ve pushed the company toward a tighter, maybe safer product. This one gives it room to chase the broader thesis — infrastructure, governance, research, and enterprise deployment all at once. That lines up with how Sycamore presents itself: frontier research that ships, a trust layer for autonomy, and direct work with large enterprises rather than a quick self-serve tool.

    It also says something about investor psychology right now. Coatue and Lightspeed aren’t backing Sycamore because enterprise agent demand is already fully proven. They’re backing it because they think the bottleneck is shifting from model quality to control, security, and orchestration. If that’s right, the valuable company won’t just be the one with the smartest model. It’ll be the one that helps enterprises trust autonomous systems enough to actually deploy them.

    And frankly, that’s a more defensible story than “we built another AI coworker.”

    How big is the market for an agent operating system?

    The numbers are why founders and VCs keep piling in. Grand View Research estimates the enterprise agentic AI market was worth about $2.58 billion in 2024, will reach $3.67 billion in 2025, and could climb to $24.5 billion by 2030, implying a 46.2% compound annual growth rate. North America held more than 39% of the market in 2024, which fits Sycamore’s focus on big U.S. enterprises.

    Gartner’s adoption forecasts are just as aggressive. It said in August 2025 that 40% of enterprise applications would feature task-specific AI agents by the end of 2026, up from less than 5% in 2025. Gartner has also said that by 2028, 33% of enterprise software applications will include agentic AI capabilities, up from less than 1% in 2024.

    But this isn’t a clean gold rush. Gartner also warned that more than 40% of agentic AI projects could be canceled by the end of 2027, largely because many efforts won’t show enough value or mature autonomy. That skepticism helps explain Sycamore’s pitch: if failure rates stay high, buyers will care even more about governance, auditability, and measured autonomy instead of raw demo magic.

    Conclusion

    Sycamore’s agent operating system pitch is ambitious, maybe uncomfortably so. That’s also why investors wrote such a big first check: they’re not funding a feature, they’re funding a bid to become the operating layer for enterprise AI agents. The next thing to watch is whether Sycamore can turn unnamed big-company traction into visible deployments before the giants swallow the category whole.

    Read how ScaleOps funding lands $130M for cloud efficiency to automate Kubernetes and AI infrastructure optimization in real time.

    FAQ

    What funding did Sycamore raise?

    Sycamore raised a $65 million seed round announced on March 30, 2026. Coatue and Lightspeed led the deal. The investor list also included firms such as Dell Technologies Capital, 8VC, Fellows Fund, E14 Fund, and Abstract Ventures, plus angels like Bob McGrew, Lip-Bu Tan, and Ali Ghodsi.

    How does Sycamore’s agent operating system work?

    It’s built to let enterprises describe intent in natural language and then generate production-ready agents and apps around that goal. It also builds integrations. The system adds governance from the start — with isolation, audit logs, permissions, human oversight, and a “progressive trust” model where agents earn more autonomy over time instead of getting it automatically.

    Who is Sri Viswanath? 

    Sri Viswanath is Sycamore’s founder and CEO, and he previously worked as CTO at Atlassian and Groupon after earlier engineering roles at Sun Microsystems and VMware. He also spent time at Coatue as an investor focused on AI and enterprise companies, which helps explain both the company’s strategy and the strength of its early backers.

    Why is the agent operating system market attracting so much money? 

    Because enterprises are moving from AI assistants to AI systems that can actually take actions across apps and workflows, and that creates a new control problem. Market researchers and Gartner both expect fast adoption and sharp revenue growth in enterprise agentic AI over the next few years. That’s why investors are willing to fund platforms that promise orchestration, governance, and security — not just another chatbot front end.

  • ScaleOps Funding Lands $130M for Cloud Efficiency

    ScaleOps Funding Lands $130M for Cloud Efficiency

    ScaleOps builds software that automatically manages cloud and AI infrastructure in real time. Now the New York-headquartered startup has announced $130 million in a Series C led by Insight Partners, a bet that companies don’t need more compute nearly as often as they need to stop wasting the compute they already have. CEO Yodar Shafrir co-founded the company in 2022 after seeing the resource mess up close at Run:ai, where static Kubernetes settings kept colliding with dynamic production workloads.

    That pitch is landing because it’s painfully familiar. GPUs sit idle. Clusters get overprovisioned. DevOps teams burn hours tuning YAML and chasing incidents. They also have to beg other teams to approve infrastructure changes that should’ve been automatic by now.

    What is ScaleOps and how does it work?

    ScaleOps is an autonomous Kubernetes optimization platform that runs on top of existing infrastructure and continuously adjusts resources in real time. It can be installed using a simple Helm command and then starts observing workload behavior and cluster signals. Based on this, it makes context-aware decisions around CPU, memory, replicas, nodes, and GPUs without requiring teams to replace their existing autoscalers.

    The platform focuses on practical automation rather than just visibility. It automatically rightsizes pod requests and limits, detects workload types such as stateless services, Spark, Kafka, and batch jobs, and applies policies without manual configuration. It also handles pod healing and reacts to demand spikes, while working alongside tools like HPA and KEDA.

    On the cost side, ScaleOps improves infrastructure efficiency by consolidating underused nodes, optimizing pod placement, and increasing the use of spot instances with safe fallback options. For AI workloads, it introduces GPU-aware optimization, including dynamic GPU sharing and scaling based on real usage instead of averages.

    The impact is clear in day-to-day operations. Without ScaleOps, engineers spend time tuning configurations and reacting to issues. With it, infrastructure decisions happen automatically in production, helping teams reduce waste, improve performance, and manage cloud environments more efficiently.
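    The kind of rightsizing decision described above can be sketched in a few lines. This is an illustrative heuristic, not ScaleOps code; the percentile and headroom values are assumptions chosen to show the idea:

```python
# Illustrative pod rightsizing heuristic: derive the CPU request from
# observed usage percentiles plus headroom, instead of a static guess.
# Not ScaleOps code; the p95 choice and 20% headroom are invented.

def rightsize_cpu_request(usage_samples_millicores: list[int],
                          headroom: float = 0.2) -> int:
    """Pick a CPU request near the 95th percentile of observed usage,
    plus safety headroom, so the pod is neither starved nor wasteful."""
    samples = sorted(usage_samples_millicores)
    p95 = samples[int(0.95 * (len(samples) - 1))]
    return int(p95 * (1 + headroom))

# A pod that mostly idles around 100m with occasional bursts toward 300m
samples = [100] * 90 + [250] * 8 + [300] * 2
print(rightsize_cpu_request(samples))  # 300 -- far below a static 1000m request
```

    The gap between that computed request and the static value an engineer typed into YAML a year ago is where the waste the article describes comes from.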

    Who founded ScaleOps and what has the company done so far?

    The founding story

    Shafrir started ScaleOps after a pattern kept repeating during his time at Run:ai. Customers liked GPU orchestration, but production teams still struggled to run real workloads efficiently once inference and broader cloud infrastructure demands showed up. His view is blunt: “Kubernetes is a great system. It’s flexible and highly configurable. But that’s also the problem.” That line gets at the whole company thesis—too much of modern infrastructure still depends on static settings in systems that are anything but static.

    Why Yodar Shafrir had a head start

    Shafrir wasn’t coming in cold. Before founding ScaleOps in March 2022, he worked at Run:ai as a senior software engineer and then as software team lead for AI orchestration. That matters. Run:ai lived at the intersection of scarce compute and enterprise infrastructure pain. He’d already seen how badly teams wanted automation that could do more than surface a problem on a dashboard.

    Traction, customers, and the Series C

    ScaleOps says the product is already in live production use across enterprise environments, not stuck in pilot mode. The company names Adobe, Wiz, DocuSign, Salesforce, and Coupa among its users. It serves large organizations globally, including in Europe and India, and reports more than 450% year-over-year growth. It also said it tripled headcount over the last 12 months and plans to more than triple again by the end of 2026.

    The Series C totals $130 million at an $800 million valuation. Insight Partners led the round. Lightspeed Venture Partners, NFX, Glilot Capital Partners, and Picture Capital joined in again. ScaleOps says total funding is now about $210 million, and this comes roughly 18 months after its $58 million Series B in November 2024.

    How ScaleOps stacks up against Cast AI, Kubecost, and Spot

    This isn’t an empty category. Cast AI is probably the closest startup comp: it raised a $108 million Series C in April 2025 and has been pushing automation for Kubernetes, AI, and cloud workloads with a similar efficiency story. Kubecost came from a different angle—cost visibility and allocation for Kubernetes—and IBM bought it in September 2024 after its earlier venture funding. Spot Ocean, now part of Flexera after the Spot portfolio changed hands in 2025, focuses on continuous Kubernetes infrastructure optimization around cost, availability, and performance.

    ScaleOps is trying to separate itself by saying visibility isn’t enough and partial automation isn’t trusted enough. Its differentiation pitch is full autonomy, application context, and production-safe execution out of the box, without piles of manual configuration. Whether that’s truly unique is debatable. But investors are clearly backing a platform that can bridge traditional cloud optimization and the newer AI-infrastructure problem in one control layer.

    Why does this ScaleOps funding round matter?

    Because this isn’t just growth capital for sales hires.

    ScaleOps says the new money will fund new products and broaden the platform as enterprises spend more on AI infrastructure. That suggests the company wants to move from “Kubernetes cost optimizer” into something closer to an autonomous infrastructure control plane. One that handles compute, memory, storage, networking, and GPUs without constant human tuning. Shafrir’s own framing is that the company is building toward “infrastructure that manages itself,” and the roadmap now has the cash to chase that idea properly.

    For customers, the point is speed and trust. Lots of teams already know where the waste is. What they usually lack is a production-safe system willing to act on that information in real time. If ScaleOps can keep reducing manual work without breaking SLOs, the value isn’t just lower cloud bills. It’s fewer interruptions, faster incident recovery, and less time spent babysitting autoscalers.

    Investors’ thesis looks pretty clear. Insight isn’t backing another reporting layer. It’s backing software that touches one of the most expensive and least efficiently managed parts of the modern stack. And because Shafrir came from Run:ai, there’s a logic to the bet: he’s not selling “AI” as a vibe. He’s selling automation against a bottleneck he’s already worked on before.

    What market trend is driving ScaleOps funding now?

    Start with the money. Gartner forecast worldwide public cloud end-user spending at $723.4 billion in 2025, up from $595.7 billion in 2024. Infrastructure-as-a-service alone was projected to hit $211.9 billion, while platform-as-a-service was expected to reach $208.6 billion. When the spend base gets that large, even small efficiency gains become huge line items.
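    The growth rate implied by those Gartner figures is easy to check:

```python
# Quick check on the Gartner figures quoted above: year-over-year growth
# implied by $595.7B (2024) -> $723.4B (2025) in public cloud end-user spend.

spend_2024, spend_2025 = 595.7, 723.4  # $B
growth = (spend_2025 - spend_2024) / spend_2024
print(f"{growth:.1%}")  # 21.4% year over year
```

    A base that large growing at over 20% a year is why single-digit efficiency gains translate into nine-figure savings for big cloud customers.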

    Then there’s Kubernetes. CNCF’s annual survey, published in January 2026, found that 82% of container users now run Kubernetes in production. That matters because the optimization problem has shifted from early adoption to day-2 operations. Kubernetes isn’t the experiment anymore. It’s the default operating layer for a lot of modern software and, increasingly, for AI workloads too.

    AI is making FinOps messier, not simpler. The FinOps Foundation’s 2025 report surveyed organizations responsible for more than $69 billion in cloud spend and found that 97% were investing in multiple infrastructure areas for AI. Spend isn’t just rising. It’s spreading across more resource types, which makes static rules and siloed tools even less useful than they were a couple of years ago.

    Final take on ScaleOps funding

    The interesting part of this ScaleOps funding round isn’t the size by itself. It’s that investors are backing a harder claim: that cloud and AI infrastructure should be managed automatically, in production, with enough context to avoid the outages and performance hits that make operators distrust automation in the first place.

    That’s ambitious. And honestly, it should be.

    The next thing to watch is whether ScaleOps can turn that promise into a broader platform for AI-era infrastructure—not just cheaper Kubernetes clusters, but software enterprises are willing to let touch their most expensive compute in real time.

    Read how Rebellions AI chip startup raises $400M for IPO to scale inference chips and expand into global AI infrastructure markets

    FAQ

    What is the latest ScaleOps funding round?

    ScaleOps just raised $130 million in a Series C round led by Insight Partners at an $800 million valuation. Existing investors Lightspeed Venture Partners, NFX, Glilot Capital Partners, and Picture Capital also participated, bringing total funding to about $210 million.

    How does ScaleOps work for Kubernetes and AI workloads?

    ScaleOps continuously adjusts infrastructure in production instead of relying on fixed configurations. It rightsizes CPU and memory, optimizes replicas and nodes, increases spot usage, and now extends that logic to GPU sharing and GPU-aware scaling for AI workloads.

    Who founded ScaleOps? 

    ScaleOps was co-founded in 2022 by Yodar Shafrir, who serves as CEO. Before that, he worked at Run:ai, where he held engineering roles focused on AI orchestration, which gave him direct experience with the compute-efficiency problems ScaleOps is now trying to solve.

    Is ScaleOps a FinOps company or a Kubernetes infrastructure company?  

    It sits between those categories. ScaleOps clearly overlaps with FinOps because it targets cloud and AI infrastructure waste, but the product behaves more like an autonomous Kubernetes and AI infrastructure operations layer than a classic cost-reporting tool.

  • Rebellions AI Chip Startup Raises $400M for IPO

    Rebellions AI Chip Startup Raises $400M for IPO

    Rebellions is a South Korean fabless semiconductor company building AI inference chips and rack-scale infrastructure for data centers. The Rebellions AI chip startup has now pulled in another $400 million in pre-IPO financing, lifting total funding to $850 million and valuing the business at about $2.34 billion. The pitch is straightforward: training flashy AI models gets the headlines, but running those models cheaply, fast, and inside real data-center power limits is where a lot of the money will be made. Co-founder and CEO Sunghyun Park started the company in 2020, and Rebellions is now pressing harder into the U.S. while widening its footprint across Asia and the Middle East.

    This round lands as Rebellions shifts from selling chips to selling a fuller inference stack.

    What does the Rebellions AI chip startup actually sell?

    Rebellions no longer looks like a chip designer that stops at silicon. Its latest offer is a full inference platform: accelerators, server and rack hardware, networking, and software for deploying models in production. The newly announced RebelRack is a ready-to-deploy unit of inference compute. RebelPOD links multiple racks into a larger cluster for enterprises that need more throughput without rebuilding everything from scratch.

    For customers, the workflow is a lot closer to modern infrastructure than old-school semiconductor buying. Its stack is cloud-native, built around Kubernetes, and works with PyTorch, vLLM, Triton, Hugging Face, and OpenShift. Teams can bring existing model-serving workflows onto Rebellions hardware without getting trapped in a proprietary environment or rewriting an application stack from zero.

    The hardware piece is more concrete than the press-release gloss suggests. Its ATOM-Max POD starts from an 8-server Mini POD and uses 400GB/s RDMA networking. It can scale to 64 NPUs per POD, with multi-rack expansion handled through what the company calls Rebellions Scalable Design. That means Rebellions is trying to remove a bunch of miserable manual work at once. Cluster design and accelerator integration are part of it. So are interconnect tuning, deployment tooling, and observability.

    Before this, buyers often had to stitch together chips, servers, networking, compilers, and model-serving software from different vendors. Rebellions is basically saying: buy the rack, deploy the models, keep the open-source tooling, and stop babysitting the plumbing. The product direction is clear.

    Who founded the Rebellions AI chip startup?

    Founding story and founder fit

    Sunghyun Park co-founded Rebellions in 2020 after a career that mixed chip design with financial systems. He holds a Ph.D. from MIT’s CSAIL, worked at Intel and SpaceX, and later became a quant developer at Morgan Stanley in New York. That mix matters. Rebellions wasn’t born from generic “AI will be huge” hype; Park has described seeing how custom silicon could push latency-sensitive workloads harder than general-purpose hardware.

    Park didn’t start the company alone. Rebellions’ founding team also included Jinwook Oh, a KAIST alumnus who previously worked as a principal designer at IBM Research in New York and now serves as CTO. Between Park’s systems background and Oh’s chip-design credentials, the company had the kind of founder-market fit deep-tech investors usually want.

    Execution record before the current push

    The company’s early product path was pretty disciplined. Ion, launched in 2021, targeted edge and finance use cases. Atom came later as the data-center part of the portfolio, with TechCrunch reporting that it was built for language models up to 7 billion parameters, while the newer Rebel line was designed for bigger generative AI workloads. By 2025, Rebellions had ATOM and ATOM-Max in mass production and deployed with customers across Japan, Saudi Arabia, and the U.S. It also powered Korea’s largest commercial AI service.

    That’s a better track record than a lot of AI chip startups manage. Plenty raise money. Fewer ship. Fewer still get commercial deployments outside their home market.

    Fundraising, expansion, and the current balance sheet

    Mirae Asset Financial Group and the Korea National Growth Fund led this latest $400 million pre-IPO round. It came after Rebellions’ $124 million Series B in January 2024 and a $250 million Series C expansion late in 2025, taking cumulative funding to $850 million. The new round also values the company at roughly $2.34 billion, with $650 million of that total raised in the last six months alone.
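    The funding math in that paragraph is internally consistent and worth a quick check (the residual figure for earlier rounds is implied by the totals, not broken out in the article):

```python
# Sanity-check the Rebellions funding figures quoted above (all in $M).
series_b = 124            # January 2024
series_c_expansion = 250  # late 2025
pre_ipo = 400             # latest round
total_claimed = 850

recent_six_months = series_c_expansion + pre_ipo
print(recent_six_months)  # 650, matching the "last six months" claim
print(total_claimed - (series_b + recent_six_months))  # 76: implied earlier financing
```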

    Rebellions is using that capital for U.S. expansion and scaled production of its Rebel100 platform. It’s also putting money into software and systems work, along with IPO preparation. Marshall Choy, who joined from SambaNova in late 2025, is leading the North American push through Rebellions’ U.S. entity and broader go-to-market effort.

    How does Rebellions compare with Nvidia alternatives?

    Rebellions is attacking Nvidia from the same angle a lot of newer infrastructure players are: inference, not training. Even inside that subgroup, the field is crowded. Groq is pushing fast, low-cost inference and on-prem GroqRack deployments. SambaNova is selling turnkey inference systems and rack products for data centers. Tenstorrent is taking a broader architecture play across AI hardware and CPUs, with chiplet partnerships of its own.

    Rebellions’ differentiation is a little more specific. It’s betting that customers want energy-efficient inference hardware and open-source software compatibility. It also wants modular systems that can scale from a single deployable rack to a clustered POD. The company is also leaning into sovereign and regional AI demand, especially in Asia and the Middle East, where buyers care about power efficiency, control, and supply-chain alternatives, not just benchmark chest-thumping.

    Why does this $400M Rebellions round matter?

    Because this isn’t just more venture money for a chip startup. It’s financing for a change in business model.

    Rebellions is moving beyond components into packaged infrastructure. That usually means better margins if it works, but it also means more execution risk. You have to manufacture and support a much bigger system. You also have to integrate it and sell it in markets like the U.S., where buyers already have options and very little patience.

    The investor mix says something, too. Mirae Asset has backed foundational technology companies before, and the Korea National Growth Fund made Rebellions its first investment under a national push to back strategic AI and semiconductor players. That’s not normal startup signaling. It suggests Rebellions is being treated not just as a venture bet, but as industrial policy with a cap table.

    Park’s line about AI being judged by operation, not just model quality, is the right framing here: “at scale, under power constraints, and with clear economic return.”

    How big is the AI inference chip market?

    It’s already huge, and it’s still getting bigger. Grand View Research estimates the global AI inference market at $113.47 billion in 2025 and projects it will reach $253.75 billion by 2030, a 17.5% compound annual growth rate. That’s one reason so many hardware vendors are now talking less about training clusters and more about inference economics.

    A broader chipset view tells the same story. Grand View Research values the global AI chipset market at $56.82 billion in 2023 and sees it reaching $323.14 billion by 2030, with Asia Pacific flagged as the fastest-growing region. It also notes that inference held the largest market share in 2023, helped by rising demand for cloud AI services and edge deployments that have to work within tight power and thermal limits.
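    The compound growth rates behind those Grand View Research projections check out:

```python
# Check the CAGRs implied by the Grand View Research figures quoted above.

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

# AI inference market: $113.47B (2025) -> $253.75B (2030)
print(f"{cagr(113.47, 253.75, 5):.1%}")  # 17.5%, matching the cited rate

# AI chipset market: $56.82B (2023) -> $323.14B (2030)
print(f"{cagr(56.82, 323.14, 7):.1%}")   # roughly 28% a year
```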

    That timing lines up with Rebellions’ thesis. As more enterprises move from experimenting with models to actually serving them, chips that deliver better performance per watt start to matter a lot more than abstract AI bragging rights.

    Should you take Rebellions seriously now?

    Yes — but with the right level of skepticism.

    The Rebellions AI chip startup has real money, real hardware, real deployments, and a founder who looks credible in semis. That puts it ahead of a long list of venture-backed chip stories that never got past slides and samples. Still, the next test isn’t fundraising. It’s whether Rebellions can turn RebelRack, RebelPOD, and its U.S. expansion into repeatable commercial wins before the IPO window opens.

    Read how Dugar Finance funding lands $5M for MSME loans to scale secured lending across tier 2 to tier 6 markets

    FAQ

    What is Rebellions raising money for?

    Rebellions is raising money to scale production, expand in the U.S., deepen its software and systems stack, and prepare for a future IPO. The latest pre-IPO round closed on March 30, 2026, and came after major financing events in January 2024 and late 2025.

    How does Rebellions’ product work for enterprise AI inference?

    It works as a full inference stack, not just a standalone chip sale. Rebellions combines accelerators and rack-scale hardware. It also includes RDMA networking and software that supports tools like PyTorch, vLLM, Triton, and Hugging Face, so customers can deploy production inference workloads with less integration pain.

    Who founded Rebellions? 

    Rebellions was founded in 2020 by Sunghyun Park and a team that included CTO Jinwook Oh. Park has a Ph.D. from MIT CSAIL and experience at Intel, SpaceX, and Morgan Stanley, while Oh came from IBM Research and brought heavyweight chip-design credentials of his own.

    Is Rebellions in the AI chip market or the broader AI infrastructure market?

    It’s now in both, and that’s the point. Rebellions started as an AI chip company focused on inference accelerators, but its newer RebelRack and RebelPOD products push it into the broader AI infrastructure category where buyers want deployable systems, not just silicon.

  • Dugar Finance Funding Lands $5M for MSME Loans

    Dugar Finance Funding Lands $5M for MSME Loans

    Dugar Finance, a Chennai-based NBFC that lends to commercial vehicle buyers and secured MSME borrowers, has raised $5 million in a pre-Series A round led by HegdInvst. The Dugar Finance funding round matters because entrepreneurs in tier 2 to tier 6 markets still need formal credit for income-generating assets, not flashy consumer finance. Founded in 1987 by Ramesh Dugar, the company now wants to use that vehicle-finance base to build a broader secured lending business across smaller towns and rural markets.

    HegdInvst — a Category II AIF focused on growth equity — is backing that next step. The new money will go into expanding secured MSME lending alongside its older vehicle finance business. It will also pay for technology infrastructure, analytics-led underwriting, centralized risk systems, and senior hires.

    What does Dugar Finance actually do?

    At the simplest level, Dugar Finance is a secured lender for borrowers that bigger institutions often underserve. Its product stack includes MSME and working-capital loans, vehicle loans, EV financing, rooftop solar loans, and mortgage loans. For a customer, the journey is pretty direct: consultation and application. Then come doorstep document collection, verification, approval, disbursal, and ongoing repayment support.

    That matters because the company isn’t selling one generic loan. An MSME borrower can use the money for inventory, equipment upgrades, or cash-flow needs. A vehicle borrower is often financing an asset that produces income right away. The green-finance products — EV and solar loans — widen the book without pushing the company into unsecured territory.

    The user experience is still branch-led and relationship-heavy, but it’s trying to reduce the usual friction. Dugar highlights minimal documentation and quick approvals. It also offers online EMI payments, branch discovery tools, and doorstep paperwork collection. That’s basic stuff on paper. In smaller markets, basic execution is the product.

    That’s also where the company’s underwriting push fits. When management says this fresh capital will strengthen analytics-led underwriting and centralized risk, it’s saying the next growth phase can’t run only on local relationships and branch know-how. It needs systems.

    Who founded Dugar Finance and how has it grown?

    How the company started

    Dugar Finance traces its roots to 1987, when it began lending as an RBI-registered NBFC in Chennai. The company was built under the leadership of Ramesh Dugar, with Sonali Dugar also listed among the promoter-directors. For most of its life, the business was known first for vehicle finance. That long history matters because Dugar isn’t a brand-new fintech trying to learn collections, collateral, or rural sourcing on the fly.

    Why the founders fit this market

    Ramesh Dugar’s credibility here comes less from startup flash and more from old-school domain depth. He has more than 3 decades in financial services and has held leadership roles in trade and lending bodies including the South India Hire Purchase Association and the Madras Hire Purchase Association. Sonali Dugar has handled marketing and human resources. She has also worked on corporate strategy inside the company. In a branch-heavy lender, that matters.

    The second line of leadership is also a clue about where this business wants to go. CEO S. Rangaraj came in after long stints with TVS and Amalgamations. CFO Narayanan previously worked at Sundaram Home Finance. Independent directors include former senior bankers from Union Bank of India and Karur Vysya Bank. That’s not accidental. It looks like a lender preparing itself for institutional scale.

    Traction, targets, and the live business today

    Dugar Finance is already operating, not experimenting. It has a presence across 6 states, more than 30 branches, and over 20,000 customers since inception, with a strong footprint in smaller towns. The company’s current assets under management stand at about ₹400 crore, and management wants that to reach ₹2,000 crore over the next 3 to 4 years. It also plans to expand into 10 states over the next 3 years.

    That plan is ambitious. But it isn’t vague. Management has attached operating guardrails to it — gross NPA below 2% and return on assets in the 4% to 5% range. For a lender pushing into semi-urban and rural MSME credit, those are the numbers worth watching, not just the AUM headline.
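    It's worth spelling out what that AUM target actually demands, since growing from ₹400 crore to ₹2,000 crore is a 5x plan:

```python
# What the AUM target above implies: Rs 400 crore -> Rs 2,000 crore
# is a 5x book in 3-4 years.

def implied_cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1 / years) - 1

for years in (3, 4):
    print(f"{years} years: {implied_cagr(400, 2000, years):.0%} per year")
# Roughly 71% annual growth over 3 years, ~50% over 4 -- which is exactly
# why the GNPA and RoA guardrails matter so much.
```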

    Fundraising details

    The latest equity cheque follows a busy funding stretch. In December 2025, Dugar Finance raised about $18 million in structured debt from domestic and international lenders including Symbiotics, British International Investment, and multiple Indian banks. Before that, it had also raised $3 million from Symbiotics’ Green Basket Bond to expand EV and rooftop solar financing.

    Now comes the pre-Series A. Ramesh Dugar put the strategy plainly: “We are entering the next phase of growth, where diversification and institutional disciplined scaling become critical. Vehicle finance gave us a strong foundation, and we are now leveraging that to build a broader secured lending platform.” HegdInvst’s Aditya Bhandari framed the investor case this way: “Dugar Finance combines a solid promoter group and a clear intent towards creating a professionally run NBFC focused on Tier 2 to 6 towns. We see significant potential in its strategy to scale a well governed & diversified secured lending platform.”

    Competition and market positioning

    Dugar Finance isn’t alone in chasing smaller-ticket secured credit outside the big metros. Regional vehicle financiers such as Mahaveer Finance are serving similar borrower profiles in South India, especially in used and commercial vehicles. The larger alternative is still a mix of banks and diversified NBFCs that can fund these customers, but often with slower processes or tighter filters. For plenty of borrowers, the real incumbent is still the informal lender down the road.

    So where does Dugar sit? Somewhere between relationship-led local financiers and more formal institutional lenders. Its pitch is secured credit and faster processing. It also leans on doorstep documentation and a sharper focus on tier 2 to tier 6 towns. Investors are backing that middle ground because it can scale if credit quality holds.

    Why does this Dugar Finance funding round matter?

    Because this round changes what the company can become.

    Until now, Dugar Finance looked like a long-running vehicle financier that had started branching into MSME and green-credit products. After this raise, it looks more like a lender deliberately re-architecting itself into a broader secured lending platform. The emphasis on underwriting analytics, centralized risk, and senior hiring tells you the company knows branch expansion alone won’t be enough.

    It also matters for customers. If Dugar pulls this off, borrowers in smaller towns get a lender that can still originate locally but approve more consistently and manage risk more professionally. That’s a better outcome than the usual trade-off between informal speed and bank-grade paperwork.

    For HegdInvst, the thesis is pretty clear. This isn’t a bet on unsecured growth at any cost. It’s a bet on a seasoned promoter group, secured products, and disciplined expansion in underserved markets where distribution still matters.

    How big is the MSME lending market Dugar Finance is chasing?

    Big enough that a ₹400 crore lender can still look tiny.

    CareEdge estimates India has roughly 63 million MSMEs, with total debt demand of ₹95.6 lakh crore. Of that, ₹50.7 lakh crore is addressable through formal channels, but formal supply stood at only ₹32.4 lakh crore as of H1 FY25 — leaving a credit gap of ₹18.3 lakh crore. That’s the hole lenders like Dugar are trying to fill.
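    The credit-gap arithmetic in those CareEdge numbers is straightforward:

```python
# The CareEdge credit-gap arithmetic quoted above (all in Rs lakh crore).
addressable_formal = 50.7  # MSME debt demand addressable via formal channels
formal_supply = 32.4       # actual formal supply as of H1 FY25
gap = addressable_formal - formal_supply
print(round(gap, 1))  # 18.3, matching the cited credit gap
```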

    NBFCs have been gaining ground in exactly this segment. CareEdge says NBFC MSME lending grew at a 32% CAGR from FY21 to FY24, and their MSME AUM is expected to cross ₹5.3 lakh crore by FY26. In micro-LAP loans below ₹10 lakh ticket size, NBFCs held more than 45% market share as of September 2024. That’s a useful backdrop for Dugar’s secured MSME push.

    There’s also a timing angle here. Secured MSME lending is benefiting from formalization and better digital credit rails. The market has also become more cautious on unsecured risk. So the move from vehicle finance into secured MSME loans doesn’t look random. It looks like a lender following where risk-adjusted growth still exists.

    Can Dugar Finance funding turn into scale?

    It can. But only if underwriting stays boring.

    Dugar Finance funding gives the company more than growth capital. It gives it a shot at becoming a more diversified, more institutional lender without abandoning the smaller-town borrowers that built the franchise. The next thing to watch is simple: can Dugar grow from ₹400 crore to ₹2,000 crore in AUM while keeping GNPA under 2% as it expands from 6 states to 10?

    Read how Gnani.ai raises $10M Series B to scale Inya voice AI platform across enterprise workflows, multilingual automation, and global markets

    FAQ

    What is the latest Dugar Finance funding round?

    Dugar Finance has raised $5 million in a pre-Series A round led by HegdInvst. The capital is earmarked for expanding secured MSME lending and deepening its vehicle finance business. It will also fund risk and underwriting systems, along with senior talent for the next phase of growth.

    How does Dugar Finance work for borrowers?

    Dugar Finance offers secured loans across MSME credit, vehicle finance, EV and solar loans, and mortgage products. The process starts with need assessment and moves through application and doorstep document collection. Then come verification, approval, disbursal, and repayment support.

    Who founded Dugar Finance?

    Dugar Finance was founded in 1987 in Chennai under the leadership of Ramesh Dugar, who remains the founder and managing director. The promoter group also includes Sonali Dugar, and the wider leadership bench now includes veterans from TVS, Sundaram Home Finance, Union Bank of India, and Karur Vysya Bank.

    Is Dugar Finance a fintech or a traditional NBFC?  

    It’s better described as a traditional NBFC that’s becoming more tech-enabled. The company still runs a branch-led, relationship-heavy model across 6 states, but this round is being used to build analytics-led underwriting, centralized risk systems, and a more scalable secured lending platform.

  • Gnani.ai Raises $10M Series B to Scale Inya Voice AI Platform

    Gnani.ai Raises $10M Series B to Scale Inya Voice AI Platform

    Gnani.ai builds voice-first AI software for enterprises that want to automate customer conversations across calls, chat, and digital workflows. Its latest Gnani.ai funding update is a $10 million Series B led by Aavishkaar Capital, with existing backer InfoEdge Ventures also joining the round. A lot of large businesses still run customer support on a messy mix of legacy IVR systems, BPO-heavy operations, and global AI tools that often stumble on noisy, multilingual Indian speech. Founded in 2016 by Ganesh Gopalan and Ananth Nagaraj, the Bengaluru-based startup is trying to turn that pain point into a full-stack enterprise AI business built around its new platform, Inya.

    What does Inya do after Gnani.ai funding?

    Inya is Gnani.ai’s full-stack agentic AI platform for building, deploying, and managing enterprise AI agents across voice and digital channels. In practice, that means a company can use a no-code builder to set up workflows and connect a knowledge base. It can choose models for different tasks, plug into existing enterprise software, and let AI agents handle lead qualification, status updates, complaint logging, renewals, scheduling, collections, and live agent assist. The system supports multilingual, low-latency conversations. It also hands work off cleanly when a human needs to step in.
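    Gnani.ai hasn’t published Inya’s internal schema, so the sketch below is purely hypothetical: every field name, URI, and workflow step is invented. It only illustrates the kind of declarative definition a no-code builder like this typically compiles down to, with channels, model routing, and per-intent workflows living in configuration rather than code.

    ```python
    # Hypothetical sketch of what a no-code agent builder might compile to.
    # Every field name, URI, and step here is invented for illustration;
    # Inya's real schema is not public.

    AGENT_CONFIG = {
        "name": "collections_agent",
        "channels": ["voice", "whatsapp"],
        "languages": ["hi-IN", "en-IN", "ta-IN"],
        "knowledge_base": "kb://collections-faq",  # invented URI scheme
        "model_routing": {
            # model-agnostic: different subtasks can use different models
            "intent_detection": "in-house-small",
            "summarization": "external-large",
        },
        "workflows": {
            "payment_reminder": ["verify_identity", "state_due_amount", "offer_plan"],
            "complaint": ["log_ticket", "handoff_to_human"],
        },
    }

    def steps_for(intent: str) -> list[str]:
        """Return the workflow steps for a detected intent, falling back
        to a human handoff when no workflow matches."""
        return AGENT_CONFIG["workflows"].get(intent, ["handoff_to_human"])
    ```

    In a design like this, swapping "external-large" for an in-house model would be a one-line config change rather than a code rewrite, which is the practical meaning of "model-agnostic."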

    What makes Inya more interesting than a standard bot builder is the stack underneath it. Gnani.ai doesn’t just sit on top of someone else’s speech layer. Its VoiceOS roadmap combines speech recognition and speech synthesis. It also brings language understanding, orchestration, and model selection into one system, so enterprises aren’t stitching together 5 vendors every time they want a working voice workflow. Inya is also model-agnostic, so customers can use Gnani.ai’s smaller in-house models or route subtasks to outside models when that makes more sense.

    The newer speech models fill in the technical pieces. Vachana STT is trained on more than 1 million hours of voice data and is designed for code-mixed speech, regional accents, noisy audio, and compressed telephony traffic. It supports streaming and batch transcription. It works across 12 Indian languages and can run with on-premise deployment for enterprises that care about tighter data control. Vachana TTS adds neural speech synthesis and voice cloning. Gopalan said the product can “voice clone a person in 6 seconds” and make that voice speak in multiple languages even if it was trained in only one.

    For customers, the difference is straightforward. Before Inya, teams usually bought separate ASR and TTS tools. They added analytics, bot logic, and CRM connectors, then spent months integrating all of it. With Inya, the pitch is simpler: build once, connect fast, and deploy across voice and digital touchpoints. Analytics, compliance, handoffs, and automation sit in one operating layer.

    Who founded Gnani.ai and how is it positioned?

    The founding story

    Gnani.ai was founded in 2016 by Ganesh Gopalan and Ananth Nagaraj. The company started in voice AI long before “agentic AI” became this year’s favorite label, and that timing matters. These founders weren’t chasing a trend after ChatGPT. They were building around a harder problem: how to make enterprise voice systems work in Indian languages, over imperfect networks, inside regulated sectors like banking and telecom.

    That early bet now looks smart.

    Why the founders fit this market

    Gopalan is the CEO and brings a mix of strategy, operations, marketing, and technical experience. He has 25 years of experience, and his earlier stint at Texas Instruments helps explain why Gnani.ai has always sounded more like an infrastructure company than a flashy app startup. He also studied at the Indian School of Business, which gives him the mix investors like in B2B founders: technical proximity and commercial discipline.

    Nagaraj, the CTO, is the builder on the engineering side. He previously worked as an applications engineer at Texas Instruments and as a senior software engineer at Aricent Group. He also co-founded 300 Feet Eco Solutions and holds a BE in Electronics and Communications from Visvesvaraya Technological University. It’s a credible background for someone now building multilingual speech systems and enterprise integrations. Low-latency voice infrastructure too.

    Traction, launches, and the fundraise

    Gnani.ai unveiled Inya at the India Impact AI Summit 2026 in February 2026. By Gopalan’s count, the platform has already signed more than 150 customers. Across the wider business, Gnani.ai serves more than 200 enterprises in sectors including BFSI, telecom, ecommerce, consumer internet, and healthcare.

    The rollout around Inya has been busy. In December 2025, the company launched Vachana STT under the IndiaAI Mission. The model was trained on more than 1 million hours of voice data and is slated to become part of its upcoming VoiceOS stack. The speech model already processes about 10 million calls a day with p95 latency of roughly 200 milliseconds. That’s good proof this isn’t just demoware.
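    For readers less familiar with the metric: a p95 latency of 200 milliseconds means 95% of calls complete within that time, while the slowest 5% can take longer. Purely as an illustration of the metric itself (this is not Gnani.ai’s measurement code), a nearest-rank p95 over a batch of latency samples can be computed like this:

    ```python
    import math

    def p95(latencies_ms: list[float]) -> float:
        """95th-percentile latency by the nearest-rank method: the
        smallest sample such that at least 95% of samples are <= it."""
        ordered = sorted(latencies_ms)
        rank = math.ceil(0.95 * len(ordered))  # 1-indexed nearest rank
        return ordered[rank - 1]
    ```

    The takeaway for buyers is that p95 hides the tail: a system can report 200 ms at p95 while its worst calls are far slower, which is why enterprise contracts often specify p99 as well.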

    Gnani.ai was also selected under the IndiaAI Mission to build a 14 billion-parameter voice AI foundational model focused on multilingual, real-time speech processing with reasoning capabilities. That’s a meaningful signal. Government-backed compute and visibility don’t guarantee product success, but they do help a startup trying to build sovereign voice infrastructure instead of just wrapping foreign APIs.

    On fundraising, the company has now raised $10 million in a Series B round led by Aavishkaar Capital, with InfoEdge Ventures participating again. The money is earmarked for 3 things: entering new verticals, expanding into global markets, and putting more fuel behind R&D and hiring.

    Competition and market positioning

    Gnani.ai isn’t alone. Enterprises looking for conversational automation can also look at players like Uniphore, Haptik, Gupshup, and other customer-engagement platforms that mix voice, chat, analytics, and automation. A lot of those systems came up through different wedges, though. Some started in contact-center analytics. Others grew out of messaging APIs or chat-led automation. Gnani.ai has stayed stubbornly voice-first.

    That’s where the differentiation sits. Gnani.ai is trying to sell a full stack for enterprise voice automation in Indian and multilingual contexts: speech recognition, synthesis, orchestration, agent assist, analytics, biometrics, and no-code agent building in one architecture. It also pushes features that matter to large enterprises more than to startup buyers. On-prem deployment. Low latency. Multilingual coverage. Compliance badges. Deep workflows for BFSI and support operations.

    The legacy alternative is even more fragmented. Old-school IVR trees, outsourced call centers, rule-based bots, and custom integrations still dominate plenty of enterprise support flows. Gnani.ai’s bet is that businesses are ready to replace that patchwork with a single voice AI platform that can act, not just answer.

    Why does Gnani.ai funding matter right now?

    This round gives Gnani.ai a chance to move from “strong voice AI vendor” to “broader enterprise AI platform company.” That’s not a cosmetic shift. It requires more product depth and more integrations. It also requires more deployment talent and a much larger sales motion than a narrow speech-tech business.

    The company is trying to make that jump at the right moment. Inya already has early customer adoption, and the surrounding speech stack is live enough to show serious operational use. So the new capital isn’t going into a concept slide. It’s being used to widen distribution, build out product, and hire people who can take a voice-first core into new industries and international markets.

    Aavishkaar Capital’s lead also says something about the investor thesis. This isn’t a pure frontier-model bet. It’s a business bet on enterprise deployment—on whether Indian companies and then overseas customers will pay for AI that handles messy, high-volume customer interactions better than legacy systems do. InfoEdge Ventures staying involved adds another layer of conviction.

    Why is India’s voice AI market growing so fast?

    The macro backdrop is loud. The Indian AI market is projected to become a $126 billion opportunity by 2030, and AI is expected to contribute as much as $1.7 trillion to India’s GDP by 2035. Those are the kinds of numbers every startup deck loves. In voice AI, though, there’s a real operational story behind them. India is multilingual, mobile-first, call-heavy, and full of businesses that still rely on voice as the main customer interface.

    Policy is pushing too. The IndiaAI Mission has a ₹10,300 crore budget over 5 years and has been expanding access to compute infrastructure, with 38,000 GPUs aggregated under the program. That matters because companies like Gnani.ai aren’t just reselling software seats. They’re training and serving AI systems that need local data, local optimization, and enough compute to run at enterprise scale.

    There’s also a product shift happening under the surface. A year ago, a lot of enterprise AI spending was still pilot money. Now buyers want automation that can plug into CRM systems, handle regulated workflows, and speak naturally across channels. That’s why voice AI, multilingual AI agents, and enterprise-grade conversational systems are getting a lot more attention than generic chatbot demos.

    What should you watch after Gnani.ai funding?

    The next test for Gnani.ai funding isn’t whether the company can raise again. It’s whether Inya becomes sticky outside the early wave of adopters and whether Gnani.ai can turn its voice advantage into a larger enterprise software business.

    That means 3 things are worth watching: global customer wins, deeper penetration beyond BFSI and telecom, and evidence that its full stack—especially VoiceOS and Inya—can keep latency low while scaling across more workflows and languages. If that happens, Gnani.ai funding won’t look like another routine Series B. It’ll look like the round that turned a speech-tech startup into a serious enterprise AI contender.

    Read how Digital Lending Platform Uncia raises $3M from Pavestone to scale enterprise lending software across India, MENA, and North America

    FAQ

    What is the latest Gnani.ai funding round?

    Gnani.ai has raised $10 million in a Series B round led by Aavishkaar Capital, with InfoEdge Ventures also participating. The company announced the round after launching Inya at the India Impact AI Summit 2026 and said the money will go toward new verticals, global expansion, R&D, and hiring.

    How does Inya work for enterprise customers?

    Inya is a no-code agentic AI platform that lets enterprises build AI agents for voice and digital channels, connect them to internal systems, and manage workflows from one place. It supports model orchestration and multilingual conversations. It also offers knowledge-base access and integrations with more than 100 enterprise tools, so teams don’t have to bolt together separate speech and automation vendors.

    Who founded Gnani.ai? 

    Gnani.ai was founded in 2016 by Ganesh Gopalan and Ananth Nagaraj. Gopalan came in with leadership and go-to-market experience that included Texas Instruments, while Nagaraj brought engineering depth from Texas Instruments, Aricent, and an earlier co-founding stint at 300 Feet Eco Solutions.

    Is Gnani.ai a voice AI company or a broader enterprise AI platform?  

    It started as a voice AI company, but it’s now trying to become a broader enterprise AI platform built around voice-first automation. That’s why the stack now spans speech recognition and text-to-speech. It also includes analytics, agent assist, biometrics, and Inya’s agent orchestration layer rather than a single-point product.

  • Digital Lending Platform Uncia Raises $3M From Pavestone

    Digital Lending Platform Uncia Raises $3M From Pavestone

    Uncia builds software for banks and NBFCs that handles loan origination and loan management, and its platform also powers supply chain finance behind the scenes. The Chennai-based startup has now raised $3 million, or about ₹25 crore, from Hyderabad-based Pavestone in a move that puts this digital lending platform story squarely in the enterprise-fintech category, not the usual consumer-loan hype cycle. A lot of lenders still run critical back-office work on old, stitched-together systems that slow approvals, servicing, and product launches. Founded in 2020 by Hari Padmanabhan, Uncia plans to use the new capital to deepen its India business and push into MENA and North America in its first external funding round.

    What does Uncia’s digital lending platform actually do?

    At a practical level, Uncia gives lenders a software stack that can take a borrower from application to sanction and then keep the loan running after disbursal. Its loan origination system, UnciaPrime, is built around a low-code journey designer and workflow orchestration. It also includes a business rules engine. That means a bank can set up who touches a file, what data gets pulled in, which credit rules apply, and what happens next without rebuilding the whole system each time. The platform also plugs into OCR for documents and bank-statement analysis. It supports e-sign, e-mandate, marketplaces, and account-aggregator style flows through 100+ APIs.
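    Uncia hasn’t published UnciaPrime’s rule syntax, but the point of a business rules engine is that credit policy lives in data rather than code, so changing which rules apply to a file doesn’t mean rebuilding the system. A minimal, hypothetical sketch of that pattern, with all rule names and thresholds invented:

    ```python
    # Minimal sketch of the business-rules-engine pattern. All rule names
    # and thresholds are invented; this is not UnciaPrime's actual syntax.

    RULES = [
        # (rule name, predicate over the application dict)
        ("min_credit_score", lambda app: app["credit_score"] >= 650),
        ("max_loan_to_income", lambda app: app["loan_amount"] <= 10 * app["monthly_income"]),
        ("kyc_complete", lambda app: app["kyc_verified"]),
    ]

    def evaluate(app: dict) -> tuple[bool, list[str]]:
        """Run every rule against an application; return (approved,
        list of failed rule names) so rejections are explainable."""
        failed = [name for name, check in RULES if not check(app)]
        return (not failed, failed)
    ```

    The design point is that a bank can add, remove, or retune entries in the rules table without touching the evaluation logic, which is what lets a low-code platform promise new credit policies without a vendor code change.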

    That’s the front end of the machine. The back half sits in UnciaLeap, the company’s loan management system. Uncia pitches it as a single LMS that can handle multiple lines of business from retail and SME lending to agri and supply chain finance. The useful part isn’t the acronym soup. It’s that lenders don’t need one servicing stack for one product and another for the next. Collections and repayment structures can live on the same application layer. So can working-capital loans, project finance, and term loans. That cuts setup time and makes product launches less painful.

    Then there’s UnciaFlow, which is where the company gets more interesting. Supply chain finance is messy because buyers, suppliers, dealers, and lenders all need to transact in one system while limits, approvals, and invoice flows stay in sync. UnciaFlow is designed as a tri-party platform with straight-through processing and predefined product templates. It also includes a standalone limit-management layer and a configuration tool called UnciaStudio. The idea is simple: lenders can onboard anchors and counterparties, launch new programs, tweak rates and access controls, and push out products faster without asking a vendor to rewrite code every week.
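    UnciaFlow’s data model isn’t public, but the limit-management problem it tackles is concrete: an anchor-level program limit and per-dealer sublimits have to stay in sync as invoices get financed, or one counterparty can quietly exhaust the whole program. A hypothetical sketch of that invariant, with names and numbers invented:

    ```python
    # Hypothetical sketch of SCF limit management: an anchor-level program
    # limit plus per-dealer sublimits that must stay in sync as invoices
    # are financed. Not UnciaFlow's actual data model.

    class ProgramLimits:
        def __init__(self, program_limit: float, dealer_limits: dict[str, float]):
            self.program_limit = program_limit
            self.dealer_limits = dealer_limits
            self.utilized = {dealer: 0.0 for dealer in dealer_limits}

        def available(self, dealer: str) -> float:
            """Headroom is capped by BOTH the dealer sublimit and the
            remaining anchor-level program limit."""
            program_headroom = self.program_limit - sum(self.utilized.values())
            dealer_headroom = self.dealer_limits[dealer] - self.utilized[dealer]
            return min(program_headroom, dealer_headroom)

        def finance_invoice(self, dealer: str, amount: float) -> bool:
            """Straight-through check: approve only if both limits hold."""
            if amount <= self.available(dealer):
                self.utilized[dealer] += amount
                return True
            return False
    ```

    Keeping both checks in one place is what makes straight-through processing safe: an invoice is approved or rejected instantly, and no manual reconciliation step is needed to keep anchor and dealer exposure consistent.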

    And that speed pitch isn’t abstract. Uncia says UnciaFlow comes with 51 predefined SCF product variations, 70+ API integrations, and a 30-60 day go-live design. In one case, its Unity Bank implementation was completed in less than 100 days. Investors will watch that closely. “Rapid deployment” sounds great until it has to work across multiple geographies and regulatory regimes.

    How did Uncia build this digital lending platform?

    The founding story

    Uncia was founded in Chennai in 2020, though the company was previously known as ThemePro Technologies. From day 1, the bet was pretty specific: build lending software for institutions, not another flashy consumer-fintech front end. That matters because SME finance, housing finance, and supply chain finance all create ugly operational work that doesn’t get solved by a prettier app alone. Uncia went after the pipes.

    Why Hari Padmanabhan fits this market

    Padmanabhan isn’t a first-time founder learning lending on the fly. Before Uncia, he founded INSYST in Dubai and helped build software products for BFSI markets across the Middle East and beyond. He later served in a senior leadership role at 3i Infotech after INSYST’s acquisition, and has also been associated with Encore, TrackIT Solutions, and Indus OS. So when Uncia talks about deep domain experience in enterprise finance software, that part checks out.

    Execution before fundraising

    The company’s build-first approach is probably the most credible part of this round. Uncia says its platforms already process more than ₹2 lakh crore in AUM for customers including ICICI Home Finance, TVS Credit, Mahindra Finance, and IDFC First Bank. That doesn’t mean global expansion will be easy. But Pavestone isn’t backing a slide deck.

    Uncia has also stacked up some early proof points in product recognition. In 2024, IBS Intelligence recognized UnciaLeap in retail lending and UnciaFlow in supply chain finance implementation. Awards don’t replace revenue. Still, in enterprise software, they help when a founder is trying to sell conservative financial institutions on a newer platform.

    The fundraising details

    This $3 million round from Pavestone is Uncia’s first outside capital. The company said the money will go into expansion in India first, then into MENA and North America. That’s a sensible order. India gives it reference customers and category fit. Overseas markets demand local compliance knowledge, deeper sales cycles, and far more patient execution.

    Padmanabhan framed the raise less like a launch and more like a scale-up moment:

    We made a deliberate choice to build before we raised. This funding is not a beginning but a gear shift. We have the product. We have validation at scale and diversity. Now we have the capital to take this to the world.

    Competition and market positioning

    Uncia isn’t entering an empty category. In India and broader lending tech, established names like Nucleus Software, Pennant Technologies, and Newgen already sell loan-origination and servicing stacks to financial institutions. On the newer cloud side, lenders can also look at focused players like CloudBankin or FinStack.

    So where does Uncia try to stand apart? Mostly on architecture and delivery. It’s pitching pure-play SaaS and multi-tenant deployment. It also emphasizes microservices, a self-serve configuration layer, and quicker go-live cycles than the old one-time-license model that still haunts plenty of banking software deals. Because it spans origination, management, and supply chain finance in one suite, it can sell a broader operating stack instead of a single narrow workflow. That’s useful for lenders that want fewer vendors, especially in India, where many institutions are modernizing in phases rather than replacing everything in one shot.

    Why does this digital lending platform round matter?

    A lot of startup rounds are really survival rounds with nicer press-release language. This one doesn’t read that way.

    Uncia spent about 5 years building product, landing institutional customers, and only then taking external money. That changes how the round should be read. Pavestone isn’t being asked to finance basic product creation. It’s funding distribution and geographic expansion. It’s also backing the boring but essential work that comes with enterprise software: implementation teams, partner channels, compliance adaptation, and customer support in new markets.

    It also matters for customers. Banks and NBFCs don’t just buy software features; they buy vendor durability. A first institutional funding round can make a younger software provider look a lot safer when procurement, IT, and risk teams are deciding whether to trust it with core loan operations. That’s especially true when the company wants to pitch itself as a long-term back-office platform rather than a bolt-on tool.

    What market is pushing digital lending platform demand?

    The macro setup is doing Uncia a favor. Chiratae Ventures and The Digital Fifth estimate India’s enterprise fintech market could reach about $20 billion by 2030, with lendingtech sitting inside a broader shift toward digitized product, sales, and servicing workflows across BFSI. The same Chiratae research has also projected India’s digital lending market could grow to a $515 billion book size by 2030.

    Why now? Banks and financial institutions are moving toward much deeper digitization in retail and MSME lending, and the plumbing underneath those journeys is finally becoming strategic. IndiaStack rails, evolving digital-lending rules, account-aggregator style data flows, and pressure to cut turnaround time are pushing lenders to spend on infrastructure, not just customer-facing apps. That’s good news for enterprise fintech vendors. Buyers are getting pickier too. A platform now has to be configurable, compliant, and fast to deploy — not just modern-looking.

    What to watch after Uncia’s digital lending platform raise

    Uncia has already cleared one important hurdle: it built enough product and customer trust to raise its first outside round on the back of real lending operations, not just ambition. That’s solid.

    Now comes the harder part. If this digital lending platform can turn Indian reference wins into repeatable playbooks for MENA and North America, the company gets a very different valuation story. If not, it stays a strong domestic lending-tech vendor. Either way, the next thing to watch isn’t the headline amount. It’s implementation quality outside its home market.

    Read how Starcloud raises $170M at $1.1B valuation to build data centers in space and scale orbital computing infrastructure.

    FAQ

    What funding did Uncia raise?

    Uncia raised $3 million, or about ₹25 crore, from Hyderabad-based Pavestone in its first external funding round. The deal was announced on March 27, 2026, and the company said the money will support expansion in India as well as market entry into MENA and North America.

    How does Uncia’s platform work for lenders?

    Uncia sells a modular lending stack, not a single-point tool. UnciaPrime handles origination and underwriting workflows. UnciaLeap manages post-disbursal servicing. UnciaFlow runs supply chain finance programs with features like predefined product templates, API integrations, and self-serve configuration.

    Who founded Uncia? 

    Uncia was founded in 2020 by Hari Padmanabhan, a longtime enterprise-software operator with deep exposure to BFSI technology. His earlier track record includes INSYST, a senior leadership role at 3i Infotech, and involvement with businesses such as ThemePro, TrackIT, and Indus OS.

    Is Uncia a fintech lender or a fintech SaaS company?

    Uncia is a fintech SaaS company in lendingtech, not a lender that underwrites loans on its own balance sheet. It sells enterprise software to banks and NBFCs, which places it in the same broad market shift that Chiratae Ventures and The Digital Fifth expect could help push India’s enterprise fintech opportunity to about $20 billion by 2030.

  • Starcloud Raises $170M at $1.1B Valuation to Build Data Centers in Space

    Starcloud Raises $170M at $1.1B Valuation to Build Data Centers in Space

    Starcloud builds orbital computing satellites that process data in space and aim to grow into full-blown off-world cloud infrastructure. Its latest Series A values the company at $1.1 billion, a striking number for a startup pitching space data centers just 17 months after Y Combinator demo day. The pitch is simple: Earth is running into power, land, and political constraints as AI data centers scale, so move the compute off-planet. Philip Johnston, Ezra Feilden, and Adi Oltean founded Starcloud in 2024 in Redmond, Washington.

    What is Starcloud and how do its space data centers work?

    Starcloud’s product is basically orbital compute infrastructure. A customer with a satellite or space station can send raw data to Starcloud’s spacecraft, run GPU-heavy processing in orbit, store results there, and then send down smaller, more useful outputs instead of dumping huge raw files back to Earth. Starcloud-2 delivers this through a smallsat with a GPU cluster, persistent storage, 24/7 access, and custom thermal and power systems. The company plans to make it fully operational in sun-synchronous orbit by 2027.

    Starcloud already has its first proof point in orbit. The company launched Starcloud-1 in November 2025 carrying the first Nvidia H100 GPU flown in space. In December 2025, that satellite ran a version of Gemini in space and became the first spacecraft to train an LLM in orbit, using nanoGPT. That matters less as a stunt than as a hardware test: if a terrestrial AI chip can survive launch and operate in space, the roadmap stops sounding totally ridiculous.

    For spacecraft operators, the before-and-after is pretty clear. Before Starcloud, an Earth observation company often has to downlink massive data sets to ground stations, wait for bandwidth, then process the payload on Earth. With Starcloud, the bet is that a lot of that work gets done in orbit first. That cuts latency and avoids wasting bandwidth on raw data. That’s why Starcloud’s first satellite is already analyzing data from Capella Space’s radar spacecraft, and why the company talks about satellite data processing as its first real business rather than chasing giant AI training jobs on day 1.
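    The bandwidth argument is easy to sanity-check with rough, invented numbers (none of these figures come from Starcloud or Capella; they only show why the ratio matters):

    ```python
    # Back-of-the-envelope downlink comparison with illustrative numbers.
    # All values below are assumptions for the sketch, not real figures.

    RAW_PASS_GB = 50.0     # assumed raw radar data collected per pass
    PROCESSED_MB = 20.0    # assumed size of analyzed outputs
    DOWNLINK_MBPS = 200.0  # assumed ground-station downlink rate

    def downlink_seconds(size_gb: float, rate_mbps: float) -> float:
        """Time to downlink `size_gb` gigabytes at `rate_mbps` megabits/s."""
        return size_gb * 8_000 / rate_mbps  # GB -> megabits, then divide by rate

    raw_time = downlink_seconds(RAW_PASS_GB, DOWNLINK_MBPS)
    processed_time = downlink_seconds(PROCESSED_MB / 1_000, DOWNLINK_MBPS)
    ```

    Under these assumptions the raw dump takes on the order of half an hour of downlink time while the processed output takes under a second, a gap of three orders of magnitude. Ground-station passes are short and contested, so that is the economic wedge the in-orbit processing pitch rests on.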

    There’s a second customer type built into the plan. Starcloud-2 isn’t only pitched to in-space users. It’s also framed as a secure, sovereign cloud node for terrestrial users who want storage and compute that sit outside any single country’s physical infrastructure. That’s niche today. Still, it shows where Starcloud wants to go: not just edge compute for satellites, but orbital data centers that can eventually pull workloads away from Earth.

    Who founded Starcloud and why build space data centers?

    The founding story

    Johnston’s pitch has always been blunt: if AI keeps scaling, Earth-bound data centers will slam into energy and permitting limits. Starcloud was built around that thesis in 2024, first with smaller orbital compute missions and then with a longer-term plan to launch much larger platforms once heavy-lift economics improve. Y Combinator’s profile shows the company was already designing a “micro data center” for a 2026 launch and a larger “Hypercluster” for the Starship era.

    Why this team has unusual founder-market fit

    Johnston isn’t a random AI founder trying on a space startup costume. He’s a second-time founder who previously worked at McKinsey on satellite projects for national space agencies, and he has degrees from Harvard, Wharton, and Columbia, plus a CFA charter. It’s an odd résumé for a rocket-adjacent company. But that mix of aerospace, policy, and capital markets fits a business that lives or dies on both engineering and launch economics.

    Feilden brings the spacecraft side. He spent about a decade in satellite design, with work at Airbus Defence & Space and Oxford Space Systems, including missions tied to NASA’s Lunar Pathfinder. His background in deployable solar arrays and large structures is especially relevant because Starcloud’s long-term design problem isn’t just compute. It’s also power generation and thermal management.

    Oltean rounds out the compute piece. At SpaceX, he worked on Starlink networking for in-motion use cases, including Starship-related work. Before that, he worked at Microsoft on large GPU production clusters and early LLM infrastructure, and Y Combinator says he holds more than 25 patents. That’s the kind of operator background investors like because Starcloud is trying to merge spacecraft engineering with data center engineering. Not replace one with the other.

    Early execution, fundraising, and the real competition

    Starcloud has moved fast enough to make investors tolerate a very capital-intensive story. The company has now raised $200 million in total. Benchmark and EQT Ventures led this Series A, and it closed just 17 months after demo day. Starcloud had already launched Starcloud-1, booked follow-on missions, and secured LOIs for H100 compute time in space before landing this round. It also set up payload manufacturing in Redmond. This isn’t a slide-deck company anymore.

    The competition is getting real, though it’s still messy. Aetherflux, founded by Robinhood co-founder Baiju Bhatt, has shifted from laser power transmission toward powering space data centers and expects its first data center satellite in 2027. Aethero is coming at the category from a different angle. It’s building radiation-hardened edge computers for satellites; its Jetson-based NxN module was built to hit 100 TOPS in a CubeSat-compatible form factor.

    Starcloud’s edge is simple: it has already put a terrestrial H100 in orbit, and it’s talking about commercial workloads now rather than only future architectures. The legacy alternative is still Earth. And that’s a massive incumbent. Investors aren’t backing Starcloud because space is easier. They’re backing it because if launch costs fall enough, energy economics could flip.

    Why does Starcloud’s Series A matter?

    This round matters because Starcloud isn’t raising money to polish software. It’s raising to build hardware that only starts to make sense at scale.

    Later in 2026, the company plans to launch Starcloud-2 with multiple GPUs, including an Nvidia Blackwell chip and an AWS server blade. It’ll also carry a bitcoin mining computer. That sounds a little chaotic. It’s also smart. Starcloud needs data on power, cooling, fault tolerance, and mixed workloads in orbit, not just one clean demo.

    Then comes Starcloud-3. The company is designing that spacecraft as a 200-kilowatt, 3-ton orbital data center built for SpaceX’s Starship deployment architecture, the “PEZ dispenser” system used for Starlink satellites. Johnston thinks that vehicle could be the first one to compete with terrestrial energy costs, with power on the order of $0.05 per kilowatt-hour if launch prices land around $500 per kilogram.

    That “if” is doing a lot of work.

    Starship still isn’t in commercial service, and Johnston has said he expects access to open in 2028 and 2029. He’s also been candid that Starcloud won’t be competitive on energy cost until Starship is flying frequently. If that slips, the fallback is to keep launching smaller versions on Falcon 9. So this Series A is really a bet on two roadmaps at once: Starcloud’s spacecraft roadmap and SpaceX’s launch roadmap.

    Johnston also broke the business model into 2 parts. Near term, Starcloud sells processing power to other spacecraft. Longer term, if launch gets cheap enough, it wants distributed orbital clusters to pull work from Earth-based data centers. Investors didn’t just fund a satellite company here. They funded a staged transition plan.

    What’s driving demand for space data centers now?

    The terrestrial data center market is a huge part of the story. In the Americas alone, operational data center capacity has reached 43.4 GW, with another 25.3 GW under construction, and nearly 89% of that pipeline was already pre-committed in early 2026. Vacancy was just 4.2%. In plain English: the market is building like crazy and still looks tight.

    It’s not just demand. It’s where and how you’re allowed to build. Cushman & Wakefield points to power availability, grid access, permitting friction, land use, and local regulation as the bottlenecks shaping the next wave of projects. That lines up almost perfectly with Starcloud’s pitch that resource and political obstacles on Earth create room for in-space computing.

    There’s still a giant reality check, though. Only dozens of advanced GPUs currently operate in orbit, while Nvidia sold nearly 4 million chips to terrestrial hyperscalers in 2025. SpaceX’s Starlink network may be the biggest satellite constellation ever built, but even that produces only around 200 megawatts of power. Meanwhile, terrestrial facilities with more than 25 gigawatts of capacity are under construction in the U.S. alone. The scale gap is absurd right now.

    That’s why the interesting question isn’t whether orbital data centers replace terrestrial ones soon. They won’t. The question is whether some high-value workloads — especially Earth observation, defense, sovereign compute, and latency-sensitive edge inference — migrate first. If they do, Starcloud doesn’t need to win the whole market to build a very real business.

    Should you take Starcloud’s orbital data center plan seriously?

    Honestly? Yes — but only if you treat it as a long-duration infrastructure bet, not a normal startup growth story.

    Starcloud has already done the part that many deep-tech companies never do: it put hardware in orbit and ran modern AI silicon there. It turned a weird thesis into something concrete. That’s a big deal. So is the fact that Benchmark and EQT were willing to fund it at a $1.1 billion valuation.

    But the company still depends on breakthroughs that aren’t fully under its control. Launch cadence. Launch cost. Space-rated cooling. Multi-satellite synchronization. All of that has to work before space data centers look anything like a mainstream cloud category.

    So the next thing to watch isn’t the valuation. It’s whether Starcloud-2 launches in 2026 and whether Starcloud can turn orbital compute from a technical flex into repeatable revenue.

    Read how Qodo raises $70M for AI code review verification as it builds a system to check and secure AI-written code before deployment

    FAQ

    What is Starcloud’s latest funding round?

    Starcloud’s latest round is a Series A that values the company at $1.1 billion. Benchmark and EQT Ventures led the financing, and the company has now raised $200 million in total after reaching that milestone only 17 months after Y Combinator demo day.

    How do Starcloud’s space data centers actually work? 

    They work by putting GPU-equipped satellites in orbit so data can be processed before it ever comes back to Earth. Starcloud-2 is designed with a GPU cluster, persistent storage, always-on access, and custom thermal and power systems, while Starcloud-1 already proved an H100 could run AI workloads in space.

    Who founded Starcloud?

    Starcloud was founded in 2024 by Philip Johnston, Ezra Feilden, and Adi Oltean in Redmond, Washington. Johnston came from McKinsey satellite work, Feilden from Airbus and Oxford Space Systems, and Oltean from SpaceX and Microsoft’s large-scale compute infrastructure world.

    Is Starcloud in the cloud market or the space market? 

    It’s really both. Starcloud sits in the emerging orbital data center category — part satellite infrastructure, part cloud computing, part edge AI — and its near-term focus is selling in-space compute to spacecraft operators before chasing broader terrestrial workloads.

  • AI Code Review Startup Qodo Raises $70M for Verification

    AI Code Review Startup Qodo Raises $70M for Verification

    Qodo is an AI code review company, and it just raised $70 million to build the software layer that checks whether AI-written code should ship at all. The New York-headquartered startup argues that faster code output has created a new problem: teams are generating more software, but they still don’t trust a lot of it. Founded in 2022 by Itamar Friedman, Qodo is betting that verification — not generation — is where enterprise developer tools get serious next. That’s the idea behind its latest Series B, which brings total funding to $120 million.

    That pitch lands at a good moment. The source article’s own survey figure is blunt: 95% of developers don’t fully trust AI-generated code, yet only 48% consistently review it before committing. So the bottleneck isn’t writing code anymore. It’s making sure the code won’t break production.

    What is Qodo and how does its AI code review work?

    Qodo is a review-first platform that sits across the tools developers already use: IDEs, pull requests, CLI flows, and Git-based workflows. On top of those, it adds automated, context-based review and a governance layer. Its product stack includes code review, a Context Engine that helps the system understand multiple repositories, governance rules enforced across teams, and developer tools inside IDEs and the CLI. Companies can also deploy it on-prem if they don’t want external infrastructure.

    In practice, teams install Qodo in their Git provider or use the IDE plugin locally. Review agents analyze code diffs as developers write, catching breaking changes and security issues, suggesting fixes in one click, and flagging missing tests before code ever reaches a pull request. The focus is on shifting review left.

    Who founded Qodo and why build AI code review now?

    The founding story

    Qodo was founded in 2022 by Itamar Friedman, with Dedy Kredo as co-founder and chief product officer. Friedman’s case for starting the company came from a specific belief: code generation and code verification are different jobs. In the source interview, he traced that view back to work on automated hardware verification at Mellanox and later to advances in language-based AI at Alibaba’s Damo Academy.

    That background matters because Friedman wasn’t reacting after ChatGPT went mainstream. He says he started Qodo just months before ChatGPT launched, after deciding that if AI was going to generate a large chunk of the world’s code, somebody had to build the systems that judged whether that code was actually right.

    Why Friedman had founder-market fit

    Friedman isn’t a random SaaS founder chasing an AI wave. Before Qodo, he co-founded Visualead and served as its CTO; Alibaba acquired the company, and Friedman then led teams there building ML-based tools used by millions. He also holds BSc and MSc degrees in electrical engineering from the Technion, with a focus on machine learning and computer vision.

    That mix of machine learning, computer vision, and infrastructure, plus a prior exit, gives him more credibility here than the usual “we added an LLM to pull requests” story. And Friedman’s own framing is sharper than most startup copy: “Code generation companies are largely built around LLMs. But for code quality and governance, LLMs alone aren’t enough.”

    Traction, funding, and the early signals

    This new round is a $70 million Series B led by Qumra Capital, with Maor Ventures, Phoenix Venture Partners, S Ventures, Square Peg, Susa Ventures, TLV Partners, Vine Ventures, Peter Welinder of OpenAI, and Clara Shih of Meta also participating. Total funding now stands at $120 million. Qodo’s earlier Series A, announced in September 2024 when the company still used the CodiumAI name, was $40 million and brought total funding at that time to $50 million.

    Qodo moved from its first review agent in 2023 to a broader enterprise code review and governance platform, and its March 2026 funding post says its enterprise footprint grew 11x over the past year. The company’s extension footprint is also big enough to notice: its about page shows 847.2K installs on the Visual Studio Code marketplace and 615.5K on JetBrains’ plugin marketplace.

    The source article adds the customer proof points investors want to see: Nvidia, Walmart, Red Hat, Intuit, Texas Instruments, Monday.com, and JFrog. It also says Qodo launched Qodo 2.0 in the past month, rolled out tools that learn each organization’s own definition of code quality, and scored 64.3% on Martian’s Code Review Bench — more than 10 points ahead of the next competitor and 25 points ahead of Claude Code Review.

    How Qodo stacks up against rivals

    Qodo isn’t alone here. CodeRabbit is explicitly selling AI code reviews across pull requests, IDEs, and CLI workflows, while Greptile has benchmarked itself against CodeRabbit, Graphite, Copilot, and other review systems. Then you have the code-generation giants (Copilot, Claude Code, Cursor, Amazon Q, Tabnine) all adding review features because they know enterprises won’t keep buying raw code output without some trust layer.

    Qodo’s angle is clear. It wants to be the dedicated review and governance layer, not a side feature bolted onto a code generator. The company leans on multi-repo context and organization-specific rules. It also leans on on-prem deployment and specialized review agents. The old alternative, of course, is still human review plus static analysis plus test suites stitched together with tribal knowledge. Qodo is trying to turn that messy combo into one system.

    Why are investors betting on AI code review verification?

    This round isn’t really about writing more code. It’s about controlling the blast radius from all the code that’s already being written by models and agents.

    Qodo’s official messaging around the Series B makes that explicit. The company wants to become a system of record for enterprise code governance, and the new money will help expand “shift-left” capabilities, proactive in-development guidance, and precision controls around AI-generated code. That sounds less like a review bot and more like infrastructure for software quality policy.

    That’s also why Qumra and the rest of the syndicate likely bought in. If code generation becomes a commodity, the scarce thing is judgment. Friedman calls that jump from “intelligence” to “artificial wisdom.” It’s a grand phrase, sure. But the commercial idea under it is simple enough: enterprises will pay for tools that reduce review noise, encode internal standards, and keep agent-written code from quietly rotting the codebase.

    How big is the market for AI code review tools?

    The adoption numbers are already loud. GitHub’s 2024 survey of 2,000 people on enterprise software teams across the U.S., Brazil, India, and Germany found that more than 97% had used AI coding tools at work at some point. More than 98% said their organizations had experimented with AI for test case generation, and GitHub also pointed to prior research showing up to a 55% productivity lift for developers using Copilot.

    The next wave is arriving even faster. A January 2026 arXiv study covering 129,134 GitHub projects estimated coding-agent adoption at 15.85% to 22.60% in just the first half of 2025, calling that unusually high for a category only months old. The paper also found agent-assisted commits tend to be larger and skew toward features and bug fixes. Exactly the kind of output that makes verification more important, not less.

    That’s the structural tailwind behind Qodo. Not “AI for coding” in the generic sense. More code. Bigger commits. Faster merges. More places for subtle failure to hide.

    Conclusion

    Qodo’s AI code review bet is blunt: the next big developer tool won’t be the one that writes the most code, but the one that keeps bad code from shipping. That’s a harder product to build, and frankly a less sexy one to market. But if enterprise teams keep piling AI agents into production workflows, verification stops being a nice extra and starts looking like the actual budget line to watch.

    Read how Mistral AI raises $830M to build a Paris data center and expand AI compute infrastructure across Europe

    FAQ

    What funding did Qodo announce?

    Qodo announced a $70 million Series B on March 30, 2026, bringing total funding to $120 million. Qumra Capital led the round, and the company said the money will help it scale its enterprise code review and governance platform.

    How does Qodo’s product work for software teams?

    Qodo runs review workflows in the IDE, in pull requests, and through a CLI so teams can validate code before and after a PR opens. It uses codebase context and ticket context. It also uses PR history to rank findings, suggest fixes, enforce internal rules, and reduce low-value review noise.

    Who founded Qodo?

    Qodo was founded in 2022 by Itamar Friedman, and Dedy Kredo is the company’s co-founder and CPO. Friedman previously co-founded Visualead, which Alibaba acquired, and he studied electrical engineering with a machine learning focus at the Technion.

    Is Qodo an AI coding assistant or an AI code review company?

    It’s much closer to an AI code review and code-governance company than to a pure coding assistant. Qodo competes with review-focused tools like CodeRabbit while also trying to sit alongside code generators such as Copilot, Cursor, Claude Code, and others as the trust layer that decides what should actually merge.