Appknox co-founder and CEO Subho Halder explores the invisible risks of AI-powered mobile apps. He breaks down how the rise of agentic AI and app wrappers has turned software into living systems, rendering traditional security models obsolete.
Subho:
If an app worked and looked good, that was considered a success. But what stood out is how much trust we placed in these mobile applications. I mean, today, if you look at it, all your credit card information, all your personally identifiable information is there on your phone. And you are carrying these credentials, payment data, personal identity, and behavioral information everywhere. Software started developing much faster and it didn't stay still anymore. Release cycles started compressing: it's no longer months, it's gone down to weeks and now even days. APIs have multiplied.
Craig:
Too many APIs are in the world. This episode is brought to you by Tastytrade. On Eye on AI, we talk a lot about how artificial intelligence is changing how people analyze information, spot patterns, and make more informed decisions. Markets are no different. The edge increasingly comes from having the right tools, the right data, and the ability to understand risk clearly. That's one of the reasons I like what Tastytrade is building. With Tastytrade, you can trade stocks, options, futures, and crypto all in one platform with low commissions, including zero commissions on stocks and crypto, so you keep more of what you earn. The platform is packed with advanced charting tools, backtesting, strategy selection, and risk analysis tools that help you think in probabilities rather than guesses. They've also introduced an AI-powered search feature that can help you discover symbols aligned with your interests, which is a smart way to explore markets more intentionally. For active traders, there are tools like Active Trader Mode, One Click Trading, and Smart Order Tracking. And if you're still learning, Tastytrade offers dozens of free educational courses, plus live support from their trade desk reps during trading hours. If you're serious about trading in a world increasingly shaped by technology, check out Tastytrade. Visit tastytrade.com to start your trading journey today. Tastytrade, Inc. is a registered broker-dealer and member of FINRA, NFA, and SIPC. So, Subho, can you introduce yourself to listeners?
Subho:
Sure. Thanks, Craig. It's great to be here. My name is Subho Halder, and I'm the co-founder and CEO at Appknox. I started my career as a security researcher long before AI was a part of everyday conversations. Back then I spent most of my time reverse engineering mobile applications for banks, telecoms, and other big consumer companies out there. Mobile security at that point was largely ignored. If an app worked and looked good, that was considered a success. But what stood out is how much trust we placed in these mobile applications. I mean, today if you look at it, all your credit card information, all your personally identifiable information is there on your phone. And you are carrying these credentials, payment data, personal identity, and behavioral information everywhere. But still, security teams kind of treat these applications like a thin client, assuming the real risk actually stays on the server. Right? And that disconnect is what led me to start Appknox. We started Appknox almost 12 years ago, and the idea was very simple: security has to move fast. At the time, most security testing was manual, slow, compliance driven. Teams would test an application once or twice a year, generate a report, and hope nothing serious had changed between releases. But software started developing much faster and it didn't stay still anymore. Release cycles started compressing: it's no longer months, it's gone down to weeks and now even days. APIs have multiplied. Too many APIs are in the world. Third-party SDKs exploded, and mobile apps started becoming more and more like living systems. It's no longer a static product that just sits there, right? And over the last decade, I watched how the security landscape evolved as well. Applications are no longer passive. They learn from users, they keep adapting over time, and they increasingly behave differently as conditions change. From a security perspective, that's a completely new challenge. Historically, if you look at it, security was about protecting code, protecting infrastructure, building perimeter security. But today it is more about understanding the behavior, the intent, the outcome of the system. We are no longer just asking, is this code vulnerable? We are asking, is the system doing something it should not be doing, even if the code looks really clean? And that's where many organizations are struggling. Their security models are built for a slower, more predictable world, a world of static binaries, fixed logic, clearly defined boundaries. But with the advent of AI, anybody can now build a mobile application. I mean, you can go to any website right now, today, and build a mobile application free of cost. That kind of breaks those assumptions once it becomes a reality, which it is as of now. So when we talk about AI and security today, I don't see it as an incremental improvement. I see it as a category shift. We are moving from securing software as an object to securing software as a living system. And that's the lens I want to bring to the discussion today. So yeah.
Craig:
Yeah. And with this, actually, I have a son who's working for an AI app. Is the problem malicious actors creating apps that do bad things, putting them out there, and unsuspecting people downloading them? Or is the problem AI-based apps that have security flaws that bad actors can exploit? Which are you focused on?
Subho:
We are focused on the security of the AI applications that bad actors can go after. And I'll give you a really basic insight. If you look at any application on your desktop today, say you open ChatGPT or Perplexity in Chrome, or you use their desktop application, and that application wants to access your location, Chrome has its own permission system, which pops up saying, hey, do you really want to share your location? But if you look at applications on mobile, those are a little more restrictive. While you are installing the application, it tells you these are the different permissions this application requires, it's just a one-time grant, and you don't think about it again once you go ahead and install the application, right? Now, ChatGPT's mobile application does have access to your location, but you had no idea about it unless you actually go into the settings, go inside the application, and figure things out. The reason I'm saying this is not because ChatGPT is a problematic app; the reason I'm saying this is because any attacker can actually create a malicious ChatGPT wrapper, offer it for free, and say, hey, you don't have to pay a monthly subscription for ChatGPT, why don't you use this for free? You go ahead and download that application from these stores, and then you realize that, oh yeah, I'm able to use the ChatGPT models, but behind the scenes this application is actually sending out so much sensitive information. And if you compare your desktop with it, your phone holds much more sensitive information about you. It has your financial records, it has your credit card information, you pay at the grocery store using your phone nowadays with Apple Wallet or Google Wallet, it has your SSN information, it has your healthcare information, and so much more. So now these applications suddenly have access to such incredibly sensitive data about you. Just imagine what happens if such an application suddenly has access to that data and a malicious actor can steal it. At the start of 2025, we started figuring out fake applications in the stores, and I'm talking about the App Store and Play Store, not third-party stores. We found a couple of applications in the App Store and Play Store that were not legitimate, and we helped get them taken down. They had the logo of ChatGPT, the logo of WhatsApp, logos of valid brands, and they talked about the same things, but behind the scenes they were leaking so much data. We were able to figure those applications out, and thankfully we were able to do it with the help of AI itself, internally, right? Because again, security at scale is where AI kind of wins the race. Unfortunately, for humans at scale, you still need time, and time is the biggest constraint in doing security testing. So that kind of helped us start validating this idea of figuring out fake applications in the App Store, Play Store, or even third-party stores. That's where we focus.
Now, the other part, which we cannot focus on but I'm hoping OEM providers like Google and Apple do, is educating the users themselves to make sure they don't fall victim or prey to downloading those applications. That responsibility lies with the OEM or the vendor, unfortunately. All we can do is make sure the OEM platforms, the marketplaces, are secure enough that those fake applications don't get into the application stores.
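To make the permission point above concrete, here is a minimal sketch of how one might list every permission an Android app requests up front, assuming the Android SDK's aapt tool is installed and on the PATH; the APK filename is hypothetical.

```python
# Minimal sketch: list the permissions an Android APK declares up front.
# Assumes the Android SDK build tool `aapt` is installed and on PATH;
# "suspect_wrapper.apk" is a hypothetical file name.
import subprocess

def list_permissions(apk_path: str) -> list[str]:
    """Return the permission entries declared in the APK manifest."""
    out = subprocess.run(
        ["aapt", "dump", "permissions", apk_path],
        capture_output=True, text=True, check=True,
    ).stdout
    perms = []
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("uses-permission"):
            # Output format varies by aapt version; the name is either quoted
            # (name='android.permission.X') or follows the colon directly.
            perms.append(line.split("'")[1] if "'" in line else line.split(":", 1)[1].strip())
    return perms

if __name__ == "__main__":
    for p in list_permissions("suspect_wrapper.apk"):
        print(p)
```

A wrapper app that asks for location, contacts, and SMS access while presenting itself as a simple chat client would stand out immediately in a listing like this.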
Craig:
Yeah. And how do they get into the stores? Because presumably, I mean, I've been through the app submission process with Apple's App Store, and it's fairly rigorous. Is it because the app reviews are now being done by AI and they just miss this? Or is Apple not paying attention?
Subho:
I think that's a very good question. So just to understand what the motivation is for these marketplaces, the Play Store and the App Store, when they publish applications: their motivation is to make sure an app does not go ahead and harm the users, which means they look at whether the application is acting like what, in the classic security world, we call a virus or a trojan. It basically checks for that. It checks whether the application is doing things it shouldn't be doing. But these fake applications that get into the store are very benign kinds of apps. They don't actually hack into your account and make a payment on your behalf; they are more like data-farming applications. So there are a couple of different kinds of fake applications, which we have defined. One is a simple wrapper kind of application, focused on earning ad revenue. Let's say I call it ChatGPT Lite and try to put it in the Play Store, the App Store, or an alternative store and say, hey, this is a valid application, it's benign, it does work. It's just adware: a lot of ads are in there, and I'm going to give it to you for free, you just need to click on the ads and I'm going to earn some money out of that. That's one category of fake applications. The other category is applications that actually do data farming. They try to gather as much data as they can from a device: they'll download all the contacts, they'll try to figure out which Wi-Fi networks the device connects to, and it still seems benign. So even in a Play Store or App Store review process, it looks like, yeah, this app requires camera access so it can take pictures and send them to the AI to analyze the image. But behind the scenes it is saying, hey, I'm not only going to send this image to the AI to analyze it, I'm also going to send this image to my own server and build a profile of whoever has installed my application. And the third one is the most dangerous one, and it is the one the Play Store and App Store mostly look for, where the application actually attacks the users who are using it. They try to circumvent the protections the OEM has placed on the devices, and those are the applications that get flagged more often than not. If you look at some of the data from last year, 2024, and this has nothing to do with AI, the category with the most fake applications is banking applications. These are applications that give you loans: you download the application and you can get a quick $10,000 loan just before you get your salary paid. And these applications have access to your SMS messages, your phone, your address book, everything. The reason they state is that they want to build a credit score to decide how much of a loan they can give people. But just imagine if I come up with another application mimicking similar behavior, except I'm not going to use that data to give you a loan; I'm going to use it for something completely different. That's where the Play Store and App Store review process kind of fails.
And if you look at the data, every year almost 200,000 to 250,000 applications are taken down from the App Store and Play Store for a reason. In almost 50 to 60 percent of cases, the reason is that they violate the safety norms for those devices. That's why they have been taken down.
Craig:
Yeah. And the people putting those apps up, they're harvesting data. Is that their business model? Do they just sell that data to data brokers, or is there some other use?
Subho:
Absolutely. So this data is being sold to data brokers, and it is being sold on the dark web as well. Again, as I mentioned, there are three kinds of fake applications, right? You have the adware application, which is a pure revenue model: I want to ride the hype of whatever new launch has happened so that as many installs as possible happen and I can earn some revenue out of that. That's the first kind of application. Second is the data-farming application. These applications go ahead and sell the data to data brokers, and they try to sell it to competitors. I'm sure if OpenAI has launched something today, then Anthropic is also trying to get OpenAI's user base to come and install their application, right? So there will be data brokers who say, hey, I know how many people have used this, why don't you use that data. Or this data is also sold to banks and financial institutions to target people who have a lot more spending capacity and power, so they can target more ads, reach out to them, and do a lot of cold calling. That's the second type. And the third type, as I talked about, is very strategic: attacking the user itself. It's much more like malware or a virus in the cybersecurity domain, as we call it. That is a very malicious intent that has nothing to do with earning money; it's about doing damage and harm to the user.
Craig:
Yeah, yeah. And these are apps that are engineered for those purposes. But what about... I mean, AI is being embedded into everything, all apps now, and apps are starting to include intelligent agents that adapt and evolve as you use them. So how much of the problem is, for example, I build an app with an API call, it's a quote-unquote AI app, and I'm just not careful with my security, and that becomes a vector for bad actors to reach the app's users?
Subho:
That's a great question, actually. The way I look at it, there are two perspectives to this: AI for security and security for AI. That's what I keep telling people as well. AI for security is how we leverage AI to make sure the security of applications is okay. The other way around is how secure the AI itself is when it acts on those devices. In terms of security for AI, whenever we are embedding these AI models into applications, what we have seen is that it requires a lot of testing to make sure these agents do not go haywire and do not start collecting data they should not be collecting, and things like that. And we have seen a lot of instances. Recently, and this has nothing to do with mobile, there was an incident where Claude Code, and I love Claude Code, went ahead and started deleting sensitive directories. How do you prevent agents from taking dangerous actions on behalf of the users? That is a very important thing. And in the mobile realm, this is all the more important, because now you have this set of very sensitive data, and then you have AI which has access to that very sensitive data. How do you put guardrails in the model and the AI to make sure that even if they have access to the data, they know exactly how to access it, or that they should not access that data at all, whatever the guardrails are? Are those guardrails good enough to make sure that doesn't happen? The other side is actually using AI for security. A decade ago, in the cybersecurity world, there was a term called script kiddies. Script kiddies are people who enter the cybersecurity world, get some script that can hack into a couple of things, and just run it without understanding anything, and it tries to get into the system. In cruder terms, we call them noobs in security. Now that has changed completely. You can use these AI models, you can use Claude Code, to actually go and try to hack into a system. You can instruct a Claude Code agent and say, hey, Claude, can you just observe this Android application, decompile this application, and figure out if there are issues, and things like that. The power of AI is the reasoning cycle, how it reasons about situations, and that changes everything. So these script kiddies, who were also called noobs in our time, are no longer script kiddies, because now they are more like prompt engineers, or security prompt engineers. All they have to do is instruct the AI model to do something offensive, and it goes ahead and does it on their behalf. And that changes things, because the attacking part has become so easy, the barrier to entry is so low, that the defending part has become more and more problematic. That's where we come in as well. At Appknox, we are on the defending side of it. We have to cover all the bases. Even one single point of failure is a problem. For attackers, they just need to find that one single point of failure. They don't need to do everything.
So for the defending side, it becomes very difficult to use the old methodology we used in the cybersecurity world. It's no longer those algorithmic ways of figuring out vulnerabilities; it's more about using AI, the reasoning cycles, to actually block these attacks as they happen. I'm sure the world is not too far off where we're going to see two AI agents fighting against each other, one trying to defend and the other trying to attack. That's going to be a reality pretty soon. So yeah, that's my view on this. Yeah.
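As a rough illustration of the guardrail idea Subho raises, here is a minimal sketch of a policy layer that sits between an agent and its tools and refuses destructive actions. The tool set, protected paths, and rules are illustrative assumptions, not how Claude Code or Appknox actually implement this.

```python
# Minimal sketch of a guardrail layer between an AI agent and its tools:
# every proposed action passes a policy check before it is executed.
from pathlib import Path

# Example protected locations; a real policy would be configurable.
PROTECTED_ROOTS = [Path.home() / "Documents", Path("/etc")]

def allowed(action: str, target: str) -> bool:
    """Policy check that runs before any agent-proposed tool call is executed."""
    path = Path(target).resolve()
    touches_protected = any(path == root or root in path.parents for root in PROTECTED_ROOTS)
    if touches_protected:
        return False                 # never touch sensitive directories
    if action == "delete":
        return False                 # deletes always need human sign-off in this sketch
    return action in {"read", "write"}

def run_tool(action: str, target: str, content: str = "") -> str:
    """Execute an agent-proposed action only if the guardrail allows it."""
    if not allowed(action, target):
        return f"BLOCKED: {action} on {target}"
    if action == "read":
        return Path(target).read_text()
    if action == "write":
        Path(target).write_text(content)
        return f"wrote {target}"
    return f"unsupported action: {action}"

# An agent proposing to wipe a sensitive directory gets refused instead of executed.
print(run_tool("delete", str(Path.home() / "Documents")))
```

The design choice is simply that the policy runs outside the model: no matter what the agent reasons its way into, the destructive call never reaches the filesystem.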
Craig:
Yeah. Actually, that's fascinating, the idea of AI agents fighting, a defender and an attacker. How is trust becoming a measurable performance indicator for AI systems? I mean, what will companies be expected to prove to users to build trust?
Subho:
I think one of the biggest problems with AI is that it's like a black box. You feed in some information, it gives you some information out. And what people are really worried about is: I really don't want to give my personal data to AI, because I'm not sure how this data is being processed internally, whether it is being saved somewhere, and how it is being processed to give me an output. That's where the word trust comes into play. Trust is very important in an AI world. There's this very thin line where, you know, we are a trust-implicit society, which means by default we try to trust something. But if the trust is broken, that's it. Then we become a trust-explicit society. Then we say, we're not going to trust you; first show me some proof, and only then am I going to go ahead and do it. That's the kind of society we live in today. Unfortunately, there have been certain instances in the past of AI models leaking information or getting into data they should not have had access to, and that builds this behavior of, hey, I do not trust AI models. So it becomes a very big problem for companies like us, who are utilizing AI to power the defending side of it, to actually build trust with users. And how we build it is by providing transparency. I have talked about this before as well: transparency is the key to building trust. If I transparently tell users, this is how I process your data, this is how the AI processes your data, and this is what the outcome of that processing is, that's where you start building trust. That's the first step. If you remember, at the start of 2025, there were incidents like DeepSeek sending all the data to servers people were not aware of. A lot of news came out because of that, and in a lot of places people stopped using DeepSeek altogether. That showcases how important trust is, right? Somehow we do trust Anthropic and OpenAI with our data because of data residency, data privacy, and things like that. Okay, my data is not going out of the country, it's still here, which is great. But the user gets spooked if the data goes somewhere else, or if the data is being mishandled or misused. So trust is a very important thing in the AI realm, and it is very important for companies, not only like us but any company in the AI space, to build that transparency about how the data is being handled and how AI models are using that data internally. Yeah.
Craig:
Yeah, actually that's interesting. Not only because of AI, but certainly AI plays a big part in it: trust is eroding, certainly in US society but globally. Trust in technology, trust in media, trust in governments. And this is a big part of it. So how do companies start evaluating not just who's accessing their app, but what their intent is?
Subho:
So it all depends on how the data is being processed and what the intent is behind the data that is getting processed. It starts with that. If a company is actually utilizing your personal data to predict or do something, for example, let's say I'm a healthcare company and I really need access to your health records to predict something or to help you with something, that is a noble cause, right? And I should be given control over which data I share and which data I don't share. So companies should give those kinds of controls to the end user and should also convey what data is needed for the use case to actually work. Unless I know the use case, I may not trust the system enough to give that data out. I understand it's a big problem in an industry like healthcare, because they need access to all your healthcare records to give you a proper treatment plan, and the only way that can happen is if you share this data with the company. Now, I'm not talking only about AI, I'm talking about a general mindset. And then there are certain controls as well, things like data scrubbing. Sometimes you need to share data, but it does not need to carry the linkages, right? So there are things like data lineage and data linkage. For example, you can share your health records, but they don't need to be identified as Craig's health records. It can be person X, and this is the health record of person X. That is enough for a model, or for the company, to predict or give you a proper treatment plan, depending on what the data is. It is easier to explain in the healthcare realm, but you can extrapolate from healthcare to other realms as well: payment processing, consumer brands, e-commerce, everything. So it is very important for users to understand why a company needs the data, how much data needs to be shared, and what the minimum is that has to be shared, so that they get the benefit out of it at the end of the day.
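Here is a minimal sketch of the "person X" idea: dropping direct identifiers and keying a record by a stable pseudonym before it is shared. The field names, salt handling, and choice of hash are illustrative assumptions; real de-identification involves much more than this.

```python
# Minimal sketch of the "person X" idea: strip direct identifiers and replace them
# with a stable pseudonym before sharing records. Field names are illustrative.
import hashlib

SALT = "keep-this-secret-and-out-of-the-shared-data"   # illustrative; manage properly in practice
DIRECT_IDENTIFIERS = {"name", "ssn", "email", "phone"}

def pseudonym(identity: str) -> str:
    """The same person always maps to the same opaque label, with no obvious way back."""
    digest = hashlib.sha256((SALT + identity).encode()).hexdigest()
    return f"person_{digest[:8]}"

def scrub(record: dict) -> dict:
    """Drop direct identifiers and key the record by a pseudonym instead."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    cleaned["subject"] = pseudonym(record["ssn"])
    return cleaned

record = {"name": "Craig", "ssn": "123-45-6789", "email": "c@example.com",
          "blood_pressure": "128/82", "a1c": 5.9}
print(scrub(record))
# -> {'blood_pressure': '128/82', 'a1c': 5.9, 'subject': 'person_...'}
```

The point of the stable pseudonym is exactly the lineage question Subho raises: a model can still follow one consistent "person X" across records without ever learning who that person is.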
Craig:
Yeah. You know, the problem, as you mentioned, is educating the public, because GDPR, as I see it, has been largely a failure. In order to comply, companies put up this little accept button, and nobody has time to read through the fine print; you need a lawyer to go through it, so they just accept. And as this trust issue grows, how do you educate people or companies on how they should handle this stuff? Also, you were saying that many of us trust OpenAI and Anthropic because the core researchers are at those companies and they've raised a lot of money from reputable VCs and investors. But I don't know what OpenAI is doing, and I use it every day, you know. And we've all had that experience of having a conversation with our spouses about, you know, going on a vacation to the Bahamas, and the next thing you know, we're getting fed ads about vacations to the Bahamas, and the go-to explanation is that my phone is listening to me. I don't know, maybe it is. I always tell my wife no, these systems gather so much data they can triangulate based on a search that somebody in the family did. But it does make you uneasy. How do companies build that trust, and how should consumers react to these apps? Yeah.
Subho:
I think this is a great question, and the thing is, trust is kind of a mix of the law, the government, and the company. It has always been a mix of all three. As an example, I run Appknox, and I can just tell you, as a blanket statement, I don't store anything in my database. Why would you trust me if I say that? You will only trust me if there are certain processes in place. So I try to back it up by saying, hey, I have SOC 2 Type 2 as a certification, which puts in place a process or a control telling you that even if I hold some sensitive data in my databases, I still have access controls around it. But still, you might say, that's okay; a little bit of trust builds there, but it's still not absolute trust. And then the third part is the law of the land, how stringent it is. You talked about GDPR not being a very strong privacy framework, and I agree with you, like those accept-cookies banners that come up. Our brains are programmed to click accept, but have you tried clicking decline? The website still works normally, right? It is just a consent. It doesn't matter whether it's an accept or a decline. Unfortunately, that's the truth in today's world. If a company is actually building on trust and you click decline, then, let's say there's a YouTube video embedded to showcase how the product works, that YouTube video should not load, because otherwise you are consenting that your data will be shared with YouTube, and you did not consent to that. It's a very simple way of figuring out whether they have actually implemented it properly or not. Unfortunately, in today's world, all we get to see is just a disclaimer message and that's it. But still, it builds a little bit of trust. And the third thing is the government. I'm happy about the US government; they do have a Congress where, if issues happen, they call the companies out and hold them accountable. I think that is a really good thing. That builds public trust, right? So the question here is: why do I trust OpenAI more than DeepSeek, which is a Chinese-based company with servers in China? The reason is, I know that if something terrible happens with my data today, and unfortunately, if it happens only to my data, maybe there's nothing I can do about it. But let's say something terrible happens to the whole country, a mass problem with a lot of data. The government is there to call them up, and they can ask them questions. But what happens if DeepSeek has the same kind of problem? Can the US government go and ask them, hey, can you come here and answer this question? That's not going to happen. And will the Chinese government do the same thing over there in China, asking why this happened? People do not trust that. So it has nothing to do with whether the US government is better or the Chinese government is better; it is about what the processes are and how people perceive those processes playing out in broad daylight. And these are all psychological matters. Unfortunately, this has nothing to do with tech; this is everything to do with psychology.
That's why we trust OpenAI or Anthropic more: because of the law of the land, because of the government, because we know that if tomorrow something terrible happens, at least they need to answer to the government of the United States. Unfortunately, if it is not in the United States, if it's in some other country with some other government, we are not sure whether they're going to do the same thing or not. And that's why the call I want to make is that every government in every country should have those kinds of laws and those kinds of processes, where they can call a company up and ask why or how user data is processed. That builds a lot of trust. So it's a shared responsibility, unfortunately. It's not just what the company can do. Yeah.
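The decline test Subho describes can even be scripted. Here is a minimal sketch, assuming Playwright is installed, that loads a page, clicks a decline button (the URL and selector are hypothetical and site-specific), and reports whether any third-party requests, such as to youtube.com, still go out.

```python
# Minimal sketch: after clicking "Decline" on a consent banner, verify that no
# requests still go out to third parties such as youtube.com.
# Assumes Playwright is installed; the URL and button selector are hypothetical.
from playwright.sync_api import sync_playwright

THIRD_PARTY_HOSTS = ("youtube.com", "doubleclick.net")   # example third parties to watch
leaked = []

def record(request):
    if any(host in request.url for host in THIRD_PARTY_HOSTS):
        leaked.append(request.url)

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.on("request", record)              # log every outgoing request
    page.goto("https://example.com")        # hypothetical site under test
    page.click("text=Decline")              # selector is site-specific; hypothetical
    page.wait_for_timeout(5000)             # give embeds a chance to load (or not)
    browser.close()

print("third-party requests after declining:", leaked or "none")
```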
Craig:
Yeah. You did a report recently about the security of retail apps, and in the recent holiday season, you know, everyone's buying. What did you find? What were some of the surprising findings from that?
Subho:
So the reason why we did the report is interesting. The Black Friday sales started, and I was like, I need to buy some gadgets for myself. And then I realized there are so many people who are using applications to buy, order, or even pre-book things on Black Friday. That led me to think, it's not just me. All my friends around me are using Amazon and other applications to pre-book, book, or order things. And then it's the holiday season, it's Christmas time, I'm going to buy a couple of things for my family, and I'll use my mobile app to do it. That's the behavior of the consumer. So then the idea came to my mind: hey, are these apps even safe to use? What are they hiding behind the scenes? And that's why we started collecting these retail applications, to figure out what we could find in them. And we did find a couple of issues, which we have written up in the report. We found applications not using secure communication between the server and the mobile app, which means there is a possibility of a man-in-the-middle attack. A man-in-the-middle attack is simply this: there is a connection, and if I am someone who can sit between the two ends of that connection, I can see what is flowing through it. Most of these applications are vulnerable to those kinds of attacks. There are a couple of applications with hard-coded API keys inside, which again falls back into the realm of creating fake or cloned shopping applications that use those API keys to call those APIs behind the scenes. That becomes a really big threat vector. In brief, that's what the report talks about: the state of these mobile applications, the retail applications.
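On the hard-coded key finding, here is a minimal sketch of the kind of check involved: searching a decompiled app's files for strings that look like credentials. It assumes the APK has already been decompiled (for example with apktool) into a local directory, and the patterns are illustrative; a real scanner does far more than this.

```python
# Minimal sketch: grep a decompiled app's files for strings that look like
# hardcoded credentials. Assumes the APK was already decompiled (e.g. with apktool)
# into ./decompiled_app; the regexes are illustrative, not exhaustive.
import re
from pathlib import Path

PATTERNS = {
    "Google API key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic secret": re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan(root: str):
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                # Truncate the match so the secret itself isn't echoed into logs.
                findings.append((str(path), label, match.group(0)[:12] + "..."))
    return findings

for finding in scan("decompiled_app"):
    print(finding)
```

Anything this turns up is exactly the material a cloned app could lift to call the retailer's backend APIs directly.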
Craig:
Yeah. And so Appknox, I mean, you do more than study this stuff. You have products. So what does Appknox do? And as we said before we started, the Knox comes from Fort Knox. What are the products that you offer? And is your market companies that are building apps, or consumers that are using apps?
Subho:
Our market is companies who are building these applications. And as you rightly said, we came up with the name Appknox from the story of Fort Knox, but we're not building forts, we are building applications, and the question is how we make sure those applications are secure. That's where the name comes from. And we have a couple of products. Our fundamental product is about finding security bugs in mobile applications, and we are focused only on mobile because that's where our talent is and that's what we are experts in. We don't want to go into other markets and compete against the very big vendors that are there. We are focused on mobile applications, and we try to be the best in that market. So the first product is about finding security issues and bugs in the application. The second one, which is very important, is what we call Storeknox, where we observe the Play Store and App Store. That's where all of these reports come from. What it does is observe the App Store, the Play Store, and third-party stores. We observe around 30-plus stores across the world, and we look at the different applications getting uploaded to those stores. We check whether an application has been scanned by Appknox or not, that's number one. Number two, we try to figure out if there are any fake applications being published in the stores, and we help brands and consumer companies take those applications down when we find them. And the third thing we do is a kind of AI pen-testing module, where we use the reasoning capability of AI to perform pen testing on top of these applications. Rather than a human doing the pen testing, we're trying to make AI do the pen testing of these applications, which, as I said at the start, is no longer manual and is much faster. We try to figure out how these applications can be hacked, and we help companies get that data so they can fix the application before it actually gets hacked. So yeah, these are some of the things we keep doing at Appknox. Yeah.
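One simple signal a store-monitoring product like the one described here could use is name similarity: a listing whose title is very close to a protected brand but whose package ID doesn't match the official one. A minimal sketch, with made-up listing data:

```python
# Minimal sketch of one "fake app" signal: a listing whose title looks like a protected
# brand but whose package ID doesn't match the official one. Listings are made up.
from difflib import SequenceMatcher

OFFICIAL = {"ChatGPT": "com.openai.chatgpt", "WhatsApp": "com.whatsapp"}

def impersonates(title: str, package: str, threshold: float = 0.8):
    """Return the brand a listing appears to impersonate, if any."""
    compact_title = title.lower().replace(" ", "")
    for brand, official_pkg in OFFICIAL.items():
        compact_brand = brand.lower().replace(" ", "")
        close = (compact_brand in compact_title
                 or SequenceMatcher(None, compact_title, compact_brand).ratio() >= threshold)
        if close and package != official_pkg:
            return brand
    return None

listings = [
    {"title": "ChatGPT", "package": "com.openai.chatgpt"},           # legitimate
    {"title": "ChatGPT Lite Free", "package": "com.adsfarm.gptl"},   # lookalike
    {"title": "Chat GPT AI Assistant", "package": "io.fake.gpt"},    # lookalike
]

for app in listings:
    brand = impersonates(app["title"], app["package"])
    if brand:
        print(f"flag for review: {app['package']} looks like it impersonates {brand}")
```

In practice this would be one of many signals alongside icon similarity, developer reputation, and what the binary itself does, which is where the scanning products above come in.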
Craig:
Yeah. You have another report on developer burnout, and it's related, because as AI starts generating more code, it affects secure development practices and debugging workflows. Can you talk about that? I found it fascinating, because one of the reasons for burnout is that although AI tools like Claude Code speed up development, they also put pressure on developers. You used to be able to do one thing over a period of time; now you're expected to do five things because, hey, you have Claude Code. Can you talk a little bit about that, and how is it affecting the security of the code written?
Subho:
I think that's a great question. So yeah, we worked on this report because there are a lot of companies coming up with prompt-based engineering. You write a prompt, hey, can you create a website or a mobile app for me, and within a matter of five to ten minutes it creates the code and gives you an application that looks really pretty and kind of works, and you're like, wow, I would have spent at least a week developing this, and now it's five minutes and I have the code. But we need to ask: is that code secure? Is it secure enough? Is it handling business logic properly? Are there guardrails and checks and balances inside these applications to make sure your data is not getting leaked anywhere? Unfortunately, that's not the goal of an AI writing a piece of code, right? We as users define the goal: hey, I need this application to look really good, and this is the operational guideline for it. But unfortunately, we do not put security best practices into the prompt. And that extrapolates to developers: now they are able to turn out code really fast. And this is a real-life example from Appknox. We had an AI agent sitting there that checks whether any error happens in the APIs. It immediately tries to figure out where the error is in the codebase on GitHub and sends out a pull request to fix that error in the API. Now, not all the pull requests need to be merged, because sometimes these errors are expected errors. But what happened is that developers now have to go and check the code: whether the code written by the AI is secure, that's number one. Number two, does it actually solve the error it is supposed to fix? And number three, they have to validate the code, even put it into a QA system, to understand whether it actually works or not. That took a lot of time. So, just to give you a sense of how it works: let's say there is a valid error. We immediately raise a Jira ticket and assign it to a developer. The developer takes one day to understand what the error is, takes another day to actually fix it, and sends it for review. The reviewer goes through the code, and it's pretty easy because it's a human. If they don't understand the code, they call up the developer: hey, I don't understand what you have written, let's get on a call. They have a chat, the half-hour review is done, then it goes to QA, everything looks good, and that's it, you push it to production. The overall turnaround time is two to three days. Now let's talk about AI. The speed at which the code got generated is a minute. It took a minute, that's it. But now I have to look at that piece of code, understand why it got generated, and try to find the root cause. I don't understand the root cause of why this was generated, so that takes me half an hour, an hour, to understand. Then I give it to a reviewer, and the reviewer asks, why did you write this piece of code? I don't have anybody to talk to about it now. So the reviewer uses another AI model, like Claude Code, to try to understand the piece of code, which confuses them even more, and then they're like, oh my god, I don't understand this.
And then the engineering manager comes and says, hey, this piece of code was written in five minutes, it's been a day, why is it not in production? Then QA tries to test it and still has no idea what this code does. Somehow a senior engineer has to get involved, and now they understand, oh, there's some mistake in this code, I'm going to modify it. They use the AI again to modify it, and then it goes through a production PR and into production. Can you see where the developer fatigue comes in? Developer fatigue has shifted from writing the piece of code to reviewing it and getting it into production. It hasn't solved anything; it has just shifted the problem from the source to a later stage. That's what happened here, and this is a real-life problem. That's why we decided, hey, can we do a survey and understand: are we doing something wrong? Is it a problem at Appknox, or is it a developer-wide problem? That's what the report talks about. Obviously, we are able to come up with prototypes and MVPs really quickly now. But are those products easily deployable in production? Are they secure enough? Do I understand the piece of code the AI has written? Do I have intricate knowledge of what it is and why it was written? All of this now gets shifted toward a later part of the SDLC, the development journey, and that is where developer burnout becomes real. That's the part of the report we have written about. Yeah.
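For a sense of the pipeline described above, here is a minimal sketch of just the last step: turning a proposed fix into a pull request that a human still has to review. The model call is a placeholder, the organization, repository, and token are hypothetical, and it assumes an earlier step has already pushed the fix branch.

```python
# Minimal sketch of the pipeline described above: an API error report goes to a model,
# and the proposed fix is surfaced as a pull request that a human still has to review.
# The model call is a placeholder; OWNER/REPO/TOKEN and the fix branch are hypothetical.
import os
import requests

OWNER, REPO = "example-org", "example-service"
TOKEN = os.environ["GITHUB_TOKEN"]

def propose_fix(error_report: str) -> dict:
    """Placeholder for the agent/model step; a real system would call an LLM here."""
    return {
        "branch": "agent/fix-timeout-handling",   # assumed already pushed with the change
        "title": "Fix unhandled timeout in payments API",
        "body": f"Auto-generated from error report:\n\n{error_report}\n\nPlease review before merging.",
    }

def open_pull_request(fix: dict) -> str:
    """Open a PR via the GitHub REST API and return its URL."""
    resp = requests.post(
        f"https://api.github.com/repos/{OWNER}/{REPO}/pulls",
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/vnd.github+json"},
        json={"title": fix["title"], "head": fix["branch"], "base": "main", "body": fix["body"]},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]

print(open_pull_request(propose_fix("HTTP 500 spike on /v1/charge, TimeoutError in charge_handler")))
```

The generation step is trivially fast, which is Subho's point: everything expensive, the security review, the root-cause understanding, the QA, happens after this script has already finished.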
Craig:
Yeah. So the problem is, the more AI takes over code writing and then reviewing of code, and the less involved humans are, the more you lose touch with what the code is doing, and there may be security vulnerabilities that a human would catch that the AI, because it's been trained on past vulnerabilities rather than new ones, may not recognize. What is penetration testing?
Subho:
Yeah, penetration testing is a methodology by which a pen tester tries to penetrate the application. For example, take a web application, let's say a shopping application. If I'm a pen tester, a penetration tester, I try to penetrate your application and pivot toward places I am not supposed to go. For example, if I am able to get access to the admin panel, or if I'm able to get access to the database, those are places which, as a normal user, I should not be able to reach. But as a penetration tester, I am able to reach them via a certain path or by exploiting certain vulnerabilities. That is what penetration testing is. In short, that's pen testing. Yeah.
Craig:
Yeah. And is there a gap between what an AI system can do in penetration testing and what a human can do?
Subho:
To be very honest, before AI, there was no way to really automate a penetration test, because to do a penetration test, you need a human. The human is the most important thing, because as a human I can reason in my brain: I try to reason about how to bypass some part of the application to reach the database or the admin panel, right? An algorithmic approach cannot do that. An algorithm has a defined set of targets; it knows the pattern and what the vulnerability looks like. But with AI coming into the picture, what penetration testing using AI gains is the reasoning cycle, how the AI reasons for itself. That is not a predefined reason. And it has its pros and cons as well. Sometimes it reasons properly, and that's what we need as part of penetration testing. Sometimes it reasons improperly, and that's what we call hallucination. If we can filter out the improper reasoning and focus on the reasoning it has done properly, that's how we can automate penetration testing. In today's world, a lot of companies are utilizing AI to do penetration testing, and that has become a possibility. We can automate part of a human's reasoning cycle in a model, where we train the model or use a foundational model like Claude or Gemini to reason out the next action it needs to take. That's how penetration testing is becoming more and more automated nowadays. Is it equivalent to what human pen testers do? I don't think it is there yet. Maybe it is at 1 percent. But I'm happy it is at 1 percent at least, right? It's better than zero. And going from 1 to 100 percent won't take long. The time horizon I'm looking at is the next two to three years, when we will have AI agents that actually do much better than human pen testers. But that's just a kind of visionary statement I would give. Hopefully, humans level up, and that's how the whole game plays out here.
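Here is a minimal sketch of the loop Subho describes: a model proposes the next step, a validation layer rejects improper reasoning (hallucinated tools or out-of-scope targets), and only vetted steps are executed. The model call and tool set are placeholders, not any real product's implementation.

```python
# Minimal sketch of an AI-driven pen-testing loop: the model proposes the next step,
# a validation layer filters out improper reasoning, and only vetted steps run.
# ask_model() and the tool set are placeholders.

ALLOWED_TOOLS = {"list_endpoints", "probe_auth", "fetch_page", "report_finding"}

def ask_model(history: list[str]) -> dict:
    """Placeholder for a call to a foundation model that proposes the next action."""
    # A real system would send the history to Claude/Gemini/etc. and parse the reply.
    return {"tool": "probe_auth", "target": "/admin", "reason": "login form returned verbose errors"}

def valid(step: dict, scope: str) -> bool:
    """Reject hallucinated tools and anything outside the agreed testing scope."""
    return step.get("tool") in ALLOWED_TOOLS and step.get("target", "").startswith(scope)

def run_step(step: dict) -> str:
    """Placeholder executor; a real harness would actually exercise the target here."""
    return f"ran {step['tool']} on {step['target']}"

def pentest(scope: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        step = ask_model(history)
        if not valid(step, scope):
            history.append(f"rejected: {step}")   # improper reasoning is filtered, not executed
            continue
        history.append(run_step(step))
    return history

print(pentest(scope="/"))
```

The interesting part is the middle layer: the filter between "the model said so" and "the harness did it" is where the hallucination problem Subho mentions gets managed.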
Craig:
Yeah, yeah. Looking forward, what concerns you? I mean, it's a positive development that AI will be able to do penetration testing. But as agents in particular grow, what's the concern that you see in the future? And what are you guys working on next?
Subho:
So we are working on exactly that. We are trying to build the brain of a penetration tester, and we're trying to train this model to think in terms of how to reason its way toward the end goal, the end objective, which might be getting access to the database or getting access to the admin panel and things like that. That's what we are working on. The future of this is amazing, because if you look at us humans, there's a problem right now where people start by saying, what will happen to my job if AI starts doing everything? Because we are trying to commoditize what a human does into an automation, right? And then the question comes, what happens to my job? But I think it is a good thing. It is not that your job is at risk; it's that you are going to solve the next-order problem, a higher-order problem. Today we don't care that computers work on binary ones and zeros. We don't really care what is exactly happening in the microprocessor or on the GPU, what the ones-and-zeros thing is doing. We work on a higher-order problem: building software on top of that. Now we don't even care about building software; we are working on prompt engineering. We're trying to build prompts: somehow, if I give this as an input in the prompt, this is the output that comes out. But at the end of the day, fundamentally, everything is running on ones and zeros, right? We are just solving higher-order problems. My point is, penetration testing will become commoditized, will become automated, but then humans need to work on higher-order problems. We need to focus more on zero-day attacks, attacks which even AI won't be able to figure out because they're so complex. And that's how the whole human-and-AI conversation will keep happening. AI is not going to come and eat your job up; you have to level up to the next higher-order problem. That's what you have to work on. That's what I feel, that's what I believe, and that's what we are working toward as well. We are trying to build an AI pen-testing product. And the difference with our AI pen-testing product is that we're working on top of binaries. There are companies who are into AI pen testing, but they work on APIs, and an API is a human-describable language. If you look at the URL, you would know: for example, if you look at appknox.com, you know that there's a company called Appknox, and that's the URL. But a binary does not have that English-language meaning. Binaries are ones and zeros, they're registers, and what we are working on is translating those binaries in a way that an AI model can understand them and then do pen testing on top of them. That's the differentiation we are trying to work on, and hopefully we'll be able to launch it in January 2026. Yeah.
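For a flavor of what "translating the binary" could involve, here is a minimal sketch that pulls readable strings and a disassembly listing out of a binary with standard tools and hands chunks of that text to a model. It assumes GNU binutils (strings, objdump) are installed, the library name is hypothetical, and the model call is a placeholder; Appknox's actual pipeline is not described at this level of detail in the interview.

```python
# Minimal sketch of turning a binary into something an LLM can reason about:
# extract readable strings and a disassembly listing, then feed chunks to a model.
# Assumes GNU binutils (`strings`, `objdump`) are installed; ask_model() is a placeholder.
import subprocess

def binary_to_text(path: str) -> str:
    strings_out = subprocess.run(["strings", "-n", "8", path],
                                 capture_output=True, text=True, check=True).stdout
    disasm_out = subprocess.run(["objdump", "-d", path],
                                capture_output=True, text=True, check=True).stdout
    return f"## extracted strings\n{strings_out}\n## disassembly\n{disasm_out}"

def ask_model(prompt: str) -> str:
    """Placeholder for a call to a foundation model."""
    return "(model analysis would go here)"

def analyze(path: str, chunk_size: int = 8000) -> list[str]:
    text = binary_to_text(path)
    # Split into model-sized chunks; a real pipeline would be smarter about boundaries.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return [ask_model("Look for insecure behavior in this fragment:\n" + c) for c in chunks]

print(analyze("libpayments.so"))   # hypothetical native library from a mobile app
```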