DeepSeek iOS app sends unencrypted data to ByteDance-controlled servers as ATS is disabled

A recent security audit uncovers a troubling set of transmission and encryption flaws in a native iOS AI chatbot, DeepSeek, raising questions about data privacy, cross-border data flows, and the security posture of a high-profile AI startup. The app reportedly sends sensitive user data over unencrypted channels, routes data through infrastructure controlled by ByteDance, and relies on deprecated encryption techniques alongside hardcoded keys. These findings come amid broader scrutiny of AI assistants and the ecosystems behind their hosting platforms, prompting calls for immediate action by policymakers and security professionals alike. The combination of insecure data transport, potential exposure to third-party systems, and governance gaps underscores an urgent need to reassess how consumer-facing AI tools handle personal information, especially when hosted or supported by entities with cross-border ownership or influence.

DeepSeek security findings and context

DeepSeek, a China-based AI startup, released an open-source AI chatbot that garnered attention for its simulated reasoning capabilities, which many observers noted were competitive with leading models in the market. Shortly after its launch, the app rapidly climbed to the top of the iPhone App Store’s Free Apps category, signaling a strong consumer interest in accessible AI assistants and the rapid adoption of new AI technologies. However, the subsequent security assessment by a mobile-focused vulnerability firm revealed a cascade of security issues that go beyond normal software quirkiness or development-stage imperfections.

The audit identified that the app communicates data across networks in ways that expose sensitive user information to potential interception. In particular, data transmission was found to occur over channels that were not encrypted, exposing information to anyone who could monitor or analyze the traffic. The security firm highlighted that modern mobile apps are expected to enforce encryption for data sent over the internet, a standard practice widely promoted by platform holders. The absence of this protective layer raises the risk of passive eavesdropping, tampering by intermediaries, and broader privacy violations that could affect individual users and organizations deploying the app in professional environments.

Beyond the encryption gap, the audit raised concerns about how the data is processed after it leaves the device. While some data may be encrypted in transit, the app reportedly sends data to servers operated or controlled by ByteDance, the parent company of TikTok. Once decrypted on the server side, the data could potentially be cross-referenced with other datasets to identify specific users and their behaviors, including queries and usage patterns. The implications are significant because they touch on who has access to personal information, how it might be used, and whether appropriate safeguards are in place to limit access and abuse.

The findings also drew attention to the app’s use of an open-weight, simulated-reasoning model, which the evaluators noted has performance benchmarks that closely resemble certain capabilities of leading models. This raised questions about the model’s security posture in addition to its performance, since sophisticated AI systems can become vectors for privacy leakage or misuse if not properly designed and safeguarded. The audit underscored that the overall security story is not only about the machine-learning model itself but also about the surrounding infrastructure, data handling practices, and governance controls that shape how sensitive data is stored, transmitted, and accessed.

As the audit progressed, other concerning behaviors emerged. The testing team flagged that the app relies on 3DES, an encryption scheme that was deprecated due to acknowledged weaknesses discovered in earlier years. The deprecation followed research that demonstrated practical attacks could exploit 3DES to decrypt traffic in certain conditions. The presence of 3DES in the app’s cryptographic toolkit is especially troubling given the sensitive nature of the data involved and the potential for adversaries to exploit any weaknesses in the encryption chain.

In addition, the audit found that the symmetric keys used for the 3DES scheme are identical across all iOS users and were hardcoded within the app. This kind of key management flaw means that compromising a single key or obtaining the key from the app instance could provide access to data across many users, dramatically amplifying the risk of data exposure. The combination of a deprecated encryption scheme with hardcoded keys represents a longstanding and well-documented security anti-pattern that has been warned against by security professionals for years.

Industry observers who reviewed the findings emphasized that these practices indicate a broader deficiency in basic security protections for user data. The co-founder of the auditing firm noted that the app appears not to be implementing essential security protections in a reliable manner, suggesting either deliberate decisions or substantial development gaps. The auditor also stressed that more questions remain as the assessment continues, but the current findings already signal material risks to user privacy and corporate data integrity.

The audit recommended concrete steps to mitigate these risks, including removing the app from environments where sensitive data is processed, whether on managed devices or in BYOD (bring-your-own-device) deployments. The rationale centers on privacy and security implications, such as insecure data transmission, hardcoded cryptographic keys, and data sharing with third parties like ByteDance. There is particular concern about data analysis and storage in regions where legal frameworks and governance practices may differ from users’ expectations or the hosting ecosystem’s commitments. The recommendations also called for evaluating the Android version of the app, which the auditors described as even less secure than the iOS counterpart, and for removing it as well where warranted.

In general, the audit highlighted a pattern of practices and behaviors that regulators and security-minded organizations would view as high-risk: insecure data channels, cross-border data movement to foreign-controlled infrastructure, and limited transparency about how data is used, stored, and shared. The findings call into question whether the app adheres to platform-enforced security standards and whether it does enough to protect user data in transit and at rest.

Data transmission and encryption: what’s happening and why it’s risky

A central pillar of the concern centers on how the app transmits data during the user onboarding and initial configuration phases. During registration, the app reportedly transmits a range of data points in the clear, including the organization identifier, the specific version of the mobile software development kit used to build the app, the user’s device operating system version, and the language preference configured by the user. The presence of such data in unencrypted form at first contact means that attackers observing network traffic could readily capture these identifiers, which could later be correlated with more extensive datasets to profile users or organizations without consent or proper authorization.
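
To make the transport concern concrete, the sketch below shows how a payload of the shape described above could be sent over an encrypted channel on iOS using URLSession. The endpoint and field names are placeholders invented for illustration, not DeepSeek’s actual API; the point of contrast is that the same request aimed at an http:// URL would expose every field to any observer on the network path, which is exactly what ATS is designed to block by default.

```swift
import Foundation

// Hypothetical onboarding payload mirroring the kinds of fields the audit describes:
// an organization identifier, the mobile SDK version, the OS version, and the language.
struct OnboardingPayload: Codable {
    let organizationID: String
    let sdkVersion: String
    let osVersion: String
    let language: String
}

func sendOnboarding(_ payload: OnboardingPayload) async throws {
    // An https:// URL keeps the payload inside a TLS tunnel; the identical request to an
    // http:// URL would travel in cleartext and, under default ATS rules, be rejected.
    let url = URL(string: "https://api.example.com/v1/onboarding")!  // placeholder endpoint

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(payload)

    let (_, response) = try await URLSession.shared.data(for: request)
    guard let http = response as? HTTPURLResponse, (200..<300).contains(http.statusCode) else {
        throw URLError(.badServerResponse)
    }
}
```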

In practice, onboarding data collection can be legitimate if adequately protected, but the combination of unencrypted transmission with cross-border data flows introduces a unique risk vector. Attackers could capture the raw onboarding payloads and use them to map out an enterprise’s technology stack or to fingerprint devices and configurations, creating a baseline for targeted attacks. The risk compounds if the same data, coupled with subsequent query traffic and usage logs, is ever transmitted over the same insecure channels or stored in locations subject to local or foreign access requests.

The data path described in the findings suggests a hybrid architecture where part of the data travels through infrastructure controlled by ByteDance affiliates. When such infrastructure hosts or assists in processing data that originated from users, there is a possibility for cross-referencing with other user data, either within ByteDance’s ecosystem or in related services. The outcome could be a more complete picture of user behavior patterns, enabling more granular profiling or marketing analytics, and in the worst case, surveillance considerations that raise concerns about the privacy rights of individuals.

The report also touched on where the data is ultimately stored and processed. The company’s stated privacy policy indicates a willingness to access, preserve, and share collected information with law enforcement or other authorities when necessary to comply with legal processes or government requests. While the policy would not be unusual in certain contexts, the combination of this policy with data being stored on servers in a jurisdiction with distinct data sovereignty laws can generate a perception of reduced privacy protections for users. In practice, the data handling policy can influence how data is routed, who has access, and under what circumstances information may be disclosed to third parties, including governmental authorities.

Additionally, the audit noted that some data transfers rely on TLS for certain layers of protection, but decryption and subsequent data handling occur in a context where the data could be cross-referenced with other datasets. The implication is that while transport-layer security may exist on certain segments of the data flow, the overall data lifecycle, from onboarding through normal usage, may include stages where sensitive information could be exposed or combined with other datasets in ways that undermine privacy guarantees.

From a risk-management perspective, these transmission patterns call for a rigorous review of data-minimization principles, secure-by-design practices, and explicit data-handling policies that align with consumer expectations and regulatory frameworks. The absence of consistent encryption during initial data transmission, combined with cross-border routing and potential data sharing with a foreign-owned entity, represents a combination of risk factors that many security teams would categorize as unacceptable in modern mobile applications, particularly those targeting broad consumer adoption in the AI space.

Encryption standards and key management: 3DES and hardcoded keys

Among the technical findings, the use of a symmetric encryption scheme based on 3DES (Triple DES) stands out as a major vulnerability. 3DES has been deprecated by major standards bodies due to well-documented weaknesses that could be exploited to decrypt traffic under practical conditions. The continued reliance on this algorithm raises questions about the long-term security guarantees the app can provide. In a modern security stack, organizations typically transition to stronger, more robust algorithms such as AES (Advanced Encryption Standard) with adequately sized keys and proper modes of operation to resist modern cryptanalytic techniques and side-channel observations.
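
To illustrate what the recommended direction looks like in practice, the following sketch uses Apple’s CryptoKit to perform authenticated encryption with AES-256-GCM, the kind of modern primitive auditors typically point to in place of 3DES. It is a generic example written under the assumption of an iOS codebase, not a reconstruction of DeepSeek’s implementation.

```swift
import CryptoKit
import Foundation

// Authenticated encryption with AES-256-GCM via CryptoKit, a modern replacement for 3DES.
func sealMessage(_ plaintext: Data, with key: SymmetricKey) throws -> Data {
    let sealedBox = try AES.GCM.seal(plaintext, using: key)
    // combined = nonce || ciphertext || authentication tag (always present for the default nonce)
    return sealedBox.combined!
}

func openMessage(_ combined: Data, with key: SymmetricKey) throws -> Data {
    let sealedBox = try AES.GCM.SealedBox(combined: combined)
    return try AES.GCM.open(sealedBox, using: key)  // fails if the data was tampered with
}

// Usage sketch: a fresh 256-bit key per user or per session, never a constant in the binary.
func demo() throws {
    let key = SymmetricKey(size: .bits256)
    let ciphertext = try sealMessage(Data("sensitive query".utf8), with: key)
    _ = try openMessage(ciphertext, with: key)
}
```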

Even more alarming is the fact that the symmetric keys used for the 3DES scheme are identical across all iOS users and are hardcoded directly into the app. The implications here are severe: if an attacker obtains the app binary, reverse-engineers the code, or captures the hardcoded keys through other means, they can potentially decrypt traffic, access personal data, and reconstruct user behavior across a broad user base. A single compromised key has outsized impact when it governs cross-user decryption across the entire installation universe.

Hardcoded keys are widely known as a critical anti-pattern in secure software development. Proper key management would involve deriving or obtaining per-user encryption keys, using secure storage mechanisms within the device, and implementing secure key distribution practices that do not expose master keys to the risk of extraction. In this case, the combination of hardcoded keys with 3DES—an outdated algorithm—creates a confluence of systemic weaknesses that security professionals consider unacceptable for consumer-grade applications, especially those handling sensitive information.
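
A minimal sketch of the alternative described above, a per-user key generated on the device and kept in the iOS Keychain rather than compiled into the binary, is shown below. The service and account identifiers are placeholders, and error handling is deliberately simplified.

```swift
import CryptoKit
import Foundation
import Security

// Per-user key management sketch: generate a key on first use and keep it in the
// Keychain, rather than shipping one shared key inside the app binary.
// The service and account strings are placeholders, not identifiers from the audited app.
enum KeyStore {
    static let service = "com.example.chatapp.encryption"
    static let account = "per-user-symmetric-key"

    static func loadOrCreateKey() throws -> SymmetricKey {
        // 1. Return the existing key if one is already stored in the Keychain.
        let query: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecReturnData as String: true,
            kSecMatchLimit as String: kSecMatchLimitOne,
        ]
        var item: CFTypeRef?
        if SecItemCopyMatching(query as CFDictionary, &item) == errSecSuccess,
           let data = item as? Data {
            return SymmetricKey(data: data)
        }

        // 2. Otherwise generate a fresh 256-bit key and persist it, bound to this device only.
        let key = SymmetricKey(size: .bits256)
        let keyData = key.withUnsafeBytes { Data($0) }
        let addQuery: [String: Any] = [
            kSecClass as String: kSecClassGenericPassword,
            kSecAttrService as String: service,
            kSecAttrAccount as String: account,
            kSecValueData as String: keyData,
            kSecAttrAccessible as String: kSecAttrAccessibleAfterFirstUnlockThisDeviceOnly,
        ]
        guard SecItemAdd(addQuery as CFDictionary, nil) == errSecSuccess else {
            throw NSError(domain: NSOSStatusErrorDomain, code: -1)
        }
        return key
    }
}
```

Because the key in this sketch is created per installation and marked as accessible on this device only, extracting it from one device does not expose any other user’s traffic, which is precisely the property a shared hardcoded key forfeits.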

The practical consequence of these encryption issues is that even if a portion of traffic is encrypted in transit, the presence of weak cryptography and insecure key management creates an unreliable security envelope around user data. Attackers who manage to monitor network traffic or access the app’s internal assets could potentially reconstruct sensitive data, correlate it with other data sources, and derive actionable intelligence about users or organizations. The risk horizon extends beyond individual users to enterprise environments that rely on the app for business operations, where leaked data could have material consequences.

Experts familiar with cryptographic best practices stress that security of data in transit is only as strong as the combination of algorithms, key management, and implementation details. In this case, the deprecated 3DES, the uniform, hardcoded keys, and the lack of robust encryption controls collectively undermine the confidentiality guarantees that users rightfully expect. The implication for developers and platform owners is clear: audit and remediation are necessary, and a transition to contemporary cryptographic standards with sound key-management practices is non-negotiable for any future deployment.

The broader takeaway for the technology ecosystem is that encryption cannot be treated as an afterthought or a box-ticking exercise. It must be embedded into the product design from the outset, with explicit controls designed to minimize data exposure, limit access, and enforce strong, standardized cryptography. In the context of AI apps that process potentially sensitive user inputs, the security baseline must be strong enough to withstand a broad range of threat actors, from opportunistic interceptors to more organized adversaries who could target cross-border data flows.

Data routing, storage, and cross-border implications

A distinctive feature highlighted by the audit is the fact that data, at least in part, traverses infrastructure operated by entities associated with ByteDance, including a cloud platform developed by ByteDance’s cloud subsidiary. The data’s geolocation routing reportedly points toward servers located in the United States and managed by a U.S.-based telecom provider, yet the privacy policy indicates storage of collected information on secure servers located in a jurisdiction under the People’s Republic of China. This arrangement raises a set of cross-border governance questions about data sovereignty, control, and the conditions under which information can be accessed by foreign authorities.

Cross-border data flows in AI and machine-learning ecosystems are inherently complex. When data leaves one legal jurisdiction for processing in another, it becomes subject to the legal frameworks, government requests, and national security considerations of both the origin and destination. The risk profile increases when the receiving jurisdiction has different norms around privacy protection, data retention, and access by law enforcement or state actors. Even when data is encrypted in transit, the potential for data at rest in the destination to be subject to access requests, or for data to be aggregated with other datasets held by the receiving entity, expands the potential exposure surface dramatically.

The privacy policy’s language that the data may be accessed, preserved, and shared with law enforcement or public authorities if there is a good-faith belief that it is necessary to comply with applicable law or government requests introduces a governance dimension that can influence user trust. While such language is not unusual in many privacy policies, the practical effect of cross-border data sharing becomes a central point of scrutiny for users, regulators, and the broader security community. The interplay between corporate data governance commitments and the realities of cross-border data processing is a critical area for ongoing oversight.

From an enterprise security perspective, reliance on infrastructure and cloud services controlled by a third party—especially a company with ties to a foreign owner—necessitates rigorous risk assessments, contractual protections, and visibility into data processing activities. Security teams would seek to ensure that data is minimized, that access is tightly controlled, and that there are clear incident-response and data-subject-rights procedures. In addition, auditors and regulators may require evidence of independent third-party security reviews, data localization considerations, and explicit transparency about data flows to alleviate concerns about data governance practices.

To the broader AI ecosystem, these dynamics illustrate the tension between rapid deployment and strict data governance. Many AI services rely on sprawling cloud and edge architectures to deliver performance and scale, but the governance implications of cross-border data processing demand careful policy design, robust technical controls, and ongoing risk monitoring. The situation with DeepSeek calls for a proactive approach to data handling that emphasizes minimization, encryption, and clear, user-centric privacy protections that are resilient across jurisdictions and regulatory environments.

Android vs iOS security posture and platform implications

The audit highlighted a concerning disparity between iOS and Android versions of the DeepSeek app. While the iOS variant was identified as having certain vulnerabilities, the Android version was described as even less secure in its configuration and data-handling practices. This assessment underscores a broader reality in mobile security: platform ecosystems have different default security stances, tooling, and enforcement mechanisms that can influence how apps store, transmit, and protect user data.

iOS security is often shaped by a combination of strict app review processes, the App Transport Security (ATS) policy, and device-level protections. When an app circumvents or disables security controls such as ATS, it weakens the defensive posture that users expect from iOS environments. The fact that ATS was globally disabled in the iOS app indicates a substantive deviation from platform-recommended security practices, which is particularly troublesome given the potential for cross-app data exposure, credential leakage, or misuse of API keys.
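
For context, ATS is controlled through keys in an app’s Info.plist, and a global opt-out of the kind the audit describes typically corresponds to the NSAllowsArbitraryLoads exception being set. The sketch below is a hypothetical check a team might add to a debug build or unit test, not anything taken from the DeepSeek codebase: it reads the app’s own Info.plist and flags that condition.

```swift
import Foundation

// Debug-build or test-time check: read the app's own Info.plist and flag the global
// App Transport Security exception that allows cleartext HTTP for every connection.
func atsIsGloballyDisabled(in bundle: Bundle = .main) -> Bool {
    guard let ats = bundle.object(forInfoDictionaryKey: "NSAppTransportSecurity") as? [String: Any] else {
        return false  // No ATS dictionary means the default (TLS required) applies.
    }
    return ats["NSAllowsArbitraryLoads"] as? Bool ?? false
}

// Usage: fail fast if someone has opted the whole app out of ATS.
// assert(!atsIsGloballyDisabled(), "ATS must not be globally disabled in release builds")
```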

On Android, the ecosystem presents additional challenges due to fragmentation, variance in OS versions, and varying levels of enforcement of security best practices by app developers and device manufacturers. An evaluation that finds Android to be even less secure than iOS in this context implies that there are likely deeper architectural or implementation gaps, such as insecure data storage, insecure network communications, or misconfigured security controls that fail to provide consistent protections across a diverse device landscape. The security implications extend beyond a single platform to the reliability of the app across the entire user base, including corporate environments and BYOD programs where devices may be managed through different mobile device management (MDM) solutions.

From a governance standpoint, ensuring equivalent security standardization across platforms is a foundational requirement for any enterprise-grade or consumer-facing application. The divergence highlighted by the DeepSeek case emphasizes the need for comprehensive security-by-design practices that apply consistently across iOS and Android, with particular attention to data-in-transit protections, secure key management, and robust data governance policies. It also reinforces the importance of platform-level controls and vendor accountability to ensure that any deviations from recommended security frameworks do not occur, or at least are mitigated through compensating controls and transparent disclosure.

The broader takeaway for developers and security teams is clear: platform-specific weaknesses can undermine the overall security posture of a cross-platform application. When an app is deployed on multiple ecosystems, a failure to meet security expectations on one platform can erode user trust, invite regulatory scrutiny, and create reputational risk for both the developer and the hosting platform. Strengthening cross-platform security requires harmonized design principles, independent security validation, and continuous monitoring that accounts for the different threat models, configurations, and enforcement environments present in iOS and Android ecosystems.

Industry reaction and expert analysis

Security professionals and researchers have weighed in on the implications of these findings, emphasizing that disabling App Transport Security and using insecure transmission channels are serious missteps in modern app development. Experts cautioned that such practices are not defensible in today’s threat landscape, where attackers can exploit unencrypted traffic to harvest credentials, personal data, and other sensitive information. The absence of robust encryption in the initial data exchanges—especially during onboarding—adds a layer of vulnerability that could be exploited by opportunistic actors or more sophisticated adversaries.

Analysts also noted that the hardcoded and identical keys across users represent a vulnerability that scales with the user base. If attackers obtain the key, they could decrypt traffic across multiple users, effectively compromising the confidentiality of broad swaths of data. This kind of vulnerability is widely regarded as a fundamental security flaw, one that should have been identified and remediated during early development phases.

Commentary from industry veterans called attention to the broader implications for app developers who rely on third-party backends and cloud services. The fact that vendor infrastructure linked to ByteDance might be implicated in data handling adds another dimension of risk, as it exposes users to the governance and privacy practices of a foreign-owned company. Observers stressed that even with strong performance in AI capabilities, security and privacy considerations must not be compromised, particularly when dealing with sensitive information and cross-border data transfers.

Experts also cautioned about the risk of data exposure to third-party domains and databases that could be publicly accessible or inadequately protected. The discovery of publicly accessible backend data, including sensitive operational details and API secrets, reinforces the need for rigorous access controls, threat modeling, and secure configurations. The presence of such exposure in any AI app ecosystem is a clear red flag that warrants immediate remediation and independent verification.

From a policy and governance perspective, the security findings have drawn the attention of lawmakers and regulators who are evaluating the potential national security implications of foreign-owned AI services operating on government networks or bearing on sensitive data. While industry commentary centers on technical risk, the regulatory dimension highlights concerns about whether consumer or enterprise AI tools should be allowed on certain devices or networks, especially if they process data that could be sensitive or restricted by cross-border data policies. These debates reflect a growing trend toward stronger vetting, compliance expectations, and risk-based usage guidelines for AI applications in both public and private sectors.

Regulatory response and potential government actions

In the wake of the security concerns, policymakers began considering swift actions to mitigate national security risks associated with DeepSeek. One line of discussion involves a potential prohibition on using the app on government devices or restricted networks, reflecting broader priorities to minimize exposure to apps and services that could facilitate data collection by foreign-owned entities or that lack robust security controls. Proposals to ban the application on government devices could be expedited under emergency or security-clearance frameworks, depending on the jurisdiction and the specific agency’s risk calculus. If enacted, such a ban could be implemented within a relatively short timeframe, potentially within weeks, reflecting the urgency that security reviews carry for national security and critical infrastructure.

The broader regulatory landscape for AI and data privacy is evolving rapidly as governments increasingly scrutinize how AI tools handle personal data, how they process, store, and share information, and what governance measures are necessary to protect citizens. The DeepSeek case contributes to a growing body of evidence that regulators are willing to intervene when credible security concerns, cross-border data flows, and governance vulnerabilities intersect with consumer-facing AI technologies. As these discussions unfold, the industry should expect heightened demands for transparency, independent security validations, and stronger compliance with cross-border data protection standards.

Security professionals emphasize that any regulatory response should aim to balance innovation with risk mitigation. On one hand, AI innovations can bring substantial social and economic benefits, particularly in education, customer support, and productivity tools. On the other hand, policymakers must ensure that security best practices, data localization where appropriate, and privacy protections are enforceable across platforms and jurisdictions. The DeepSeek case illustrates the importance of adopting clear security benchmarks, publishing independent audit results, and establishing enforceable remediation timelines to restore user trust and to safeguard sensitive data.

For organizations using or considering AI tools, the regulatory signals translate into practical steps to enhance risk management. These steps include instituting robust data minimization strategies, enforcing end-to-end encryption with modern cryptographic standards, and ensuring that data processing adheres to strict data governance policies that align with local and international laws. Enterprises should also consider implementing vendor risk management programs to evaluate the security posture of third-party providers and to ensure that cross-border data flows comply with applicable legal requirements and best practices.

The evolving policy environment also highlights the need for clearer disclosures and user rights mechanisms. Consumers should expect transparent information about what data is collected, how it is used, who it is shared with, and under what circumstances it may be disclosed to authorities. Users should also expect redress mechanisms and avenues to opt out or delete data where appropriate. In practice, adopting these measures requires coordinated efforts among developers, cloud providers, platform owners, and regulators to establish a foundation of trust that supports safe and beneficial AI deployment.

Implications for AI ecosystems and safeguarding future deployments

The DeepSeek security findings resonate beyond a single product, offering a cautionary tale for the broader AI ecosystem. When AI services rely on open-weight models and external infrastructure, the security envelope must be designed with the same rigor as the AI capabilities themselves. The risk of data exposure, unauthorized access, or cross-border data handling underscores the importance of robust security practices that are integrated into the core architecture of AI applications.

Key implications for the future of AI tool deployment include:

  • Strengthening encryption by default: avoiding deprecated algorithms, generating and managing keys securely, and protecting them with device-bound resources and hardware-backed security modules where feasible (a Secure Enclave sketch follows this list).

  • Ensuring strong key management: Eliminating hardcoded keys, using per-user or per-session keys, and implementing key rotation, secure storage, and auditable key usage.

  • Enforcing transport security: Enforcing ATS or equivalent controls across platforms, and ensuring that no data is sent in the clear at any stage of data transmission, including onboarding and initial configuration.

  • Rigor in data governance: Defining explicit data collection boundaries, data retention policies, data sharing agreements, and data subject rights that reflect consumer and enterprise expectations for privacy and security.

  • Transparent cross-border data flows: Providing clear visibility into where data is stored, how it is processed, and which authorities can access it, along with mechanisms to minimize cross-border exposure where possible.

  • Independent security validation: Regular, public, and rigorous third-party security assessments that verify that the product adheres to modern security standards and that remediation actions are timely and effective.

  • Lifecycle security stewardship: Integrating security review gates into the product lifecycle—from design and development through deployment and ongoing operation—to ensure continuous protection as the app evolves.

  • Governance and accountability: Establishing clear accountability for security decisions, with documented policies, incident response plans, and ongoing governance oversight to ensure that security controls remain effective in practice.
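
As a concrete illustration of the hardware-backed key management called for in the list above, the sketch below generates a private key inside the Secure Enclave using Apple’s Security framework. A key created this way never leaves the hardware and can only be used, not exported, which directly rules out the shared hardcoded-key failure mode described earlier; the application tag is a placeholder chosen for the example.

```swift
import Foundation
import Security

// Hardware-backed key sketch: the private key is generated inside the Secure Enclave
// and cannot be extracted from it, only used through the Security framework APIs.
func makeSecureEnclaveKey(tag: String = "com.example.chatapp.device-key") throws -> SecKey {
    guard let access = SecAccessControlCreateWithFlags(
        kCFAllocatorDefault,
        kSecAttrAccessibleWhenUnlockedThisDeviceOnly,
        .privateKeyUsage,
        nil
    ) else {
        throw NSError(domain: NSOSStatusErrorDomain, code: -1)
    }

    let attributes: [String: Any] = [
        kSecAttrKeyType as String: kSecAttrKeyTypeECSECPrimeRandom,  // P-256
        kSecAttrKeySizeInBits as String: 256,
        kSecAttrTokenID as String: kSecAttrTokenIDSecureEnclave,     // keep the key in hardware
        kSecPrivateKeyAttrs as String: [
            kSecAttrIsPermanent as String: true,
            kSecAttrApplicationTag as String: Data(tag.utf8),
            kSecAttrAccessControl as String: access,
        ],
    ]

    var error: Unmanaged<CFError>?
    guard let key = SecKeyCreateRandomKey(attributes as CFDictionary, &error) else {
        throw error!.takeRetainedValue() as Error
    }
    return key
}
```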

For developers and organizations considering AI-based tools, the DeepSeek case underscores the risk of insufficient security controls and governance when data flows cross borders or involve third-party infrastructure. It reinforces the need for rigorous security-by-design principles, explicit data-handling policies, and proactive risk management that keeps pace with the rapid evolution of AI capabilities. As AI assistants become more embedded in daily workflows and enterprise operations, the security and privacy implications of their design choices will increasingly determine their legitimacy, adoption, and societal impact.

Conclusion

The security assessment of the DeepSeek iOS app reveals a constellation of issues that pose real risks to user privacy and data integrity. Unencrypted data transmission during onboarding, cross-border data traffic to ByteDance-controlled infrastructure, the use of a deprecated encryption scheme, and hardcoded cryptographic keys create a high-risk environment that security professionals view as unacceptable for modern mobile apps. The prospect that data could be decrypted or intercepted by an attacker, combined with potential cross-referencing with other datasets, underscores the urgency of remediation, independent validation, and a transparent approach to data handling.

Experts advocate for immediate removal of the DeepSeek application from vulnerable environments, especially where sensitive information may be at risk. Android versions appear to require even more stringent scrutiny, and broader ecosystem safeguards are necessary to prevent similar vulnerabilities across platforms. The evolving regulatory landscape adds weight to the call for governance improvements, including clearer data localization policies, enforceable security standards, and timely remediation timelines in response to credible findings.

For users and organizations relying on or considering AI chatbots, the episode serves as a stark reminder that cutting-edge capabilities must be matched by robust security practices and responsible data governance. Only through rigorous security design, vigilant oversight, and a commitment to transparent, privacy-respecting data handling can AI tools earn the trust they need to realize their transformative potential. As policymakers, security researchers, and industry stakeholders continue to scrutinize these tools, the DeepSeek case will likely become a touchstone for how the industry approaches encryption standards, data sovereignty, and cross-border data protection in the age of intelligent machines.