Google Cloud links poor credentials to nearly half of all cloud-based attacks

Cloud services with weak credentials were a prime target for attackers, often resulting in lateral movement attempts, a Google Cloud report found.

Dive Brief:

  • Cloud services accounts with weak or non-existent credentials were the most common entry point for attackers in the second half of 2024, Google Cloud said Wednesday in its Threat Horizons Report.
  • Attacks involving weak or no credentials accounted for nearly half of intrusions observed or studied by Google Threat Intelligence Group, Mandiant, Google Cloud’s Office of the CISO and other Google intelligence and security teams during the second half of last year. 
  • Misconfigurations in cloud services were the second most common initial access vector, representing more than 1 in 3 attacks Google Cloud studied. The report noted a sharp increase in compromised application programming interfaces and user interfaces, which accounted for almost 1 in 5 attacks during the second half of the year.

cybersecuritydive.com

Crypto-stealing malware uses OCR to find info in victim’s photo libraries

A malicious software development kit (SDK) used in Android and iOS apps has been found to use optical character recognition to scan victims’ photo libraries, looking for cryptocurrency wallet IDs and recovery key information.

Any cryptocurrency information it finds hiding within the victim’s photo libraries is transmitted back to the operators, who then use it to gain access to and drain the wallets of their currency.

While not entirely unimaginable, this is a fairly novel attack method, and many people do photograph important information, such as recovery phrases, for safekeeping. Advances in OCR, including Apple's and Google's own machine-learning frameworks, now make it trivial to search thousands of photographs for specific content quickly (a short sketch of how little code this takes follows below).

bleepingcomputer.com 
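To give a sense of how low the bar is, here is a minimal sketch in Python that scans a folder of images and flags text resembling a recovery phrase or wallet key. It uses the open-source pytesseract library as a stand-in for the mobile OCR frameworks the malware would actually lean on; the folder name, regexes and thresholds are illustrative assumptions, not anything taken from the SDK itself. The same approach is handy for auditing how much sensitive text sits in your own photo library.

    # Minimal sketch, not the actual SDK's code: OCR a folder of images and flag
    # anything that looks like a seed phrase (a run of 12+ short lowercase words)
    # or a long base58-style key. pytesseract stands in for Apple Vision / ML Kit.
    import re
    from pathlib import Path

    from PIL import Image       # pip install pillow
    import pytesseract          # pip install pytesseract (needs the tesseract binary)

    MNEMONIC_RE = re.compile(r"\b(?:[a-z]{3,8}\s+){11,}[a-z]{3,8}\b")   # BIP-39-style phrase
    KEY_RE = re.compile(r"\b[1-9A-HJ-NP-Za-km-z]{25,}\b")               # base58-looking string

    def scan_photos(folder: str) -> list[tuple[str, str]]:
        """Return (filename, matched snippet) pairs for images containing suspicious text."""
        hits = []
        for path in Path(folder).iterdir():
            if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
                continue
            text = pytesseract.image_to_string(Image.open(path))
            match = MNEMONIC_RE.search(text.lower()) or KEY_RE.search(text)
            if match:
                hits.append((path.name, match.group(0)[:60]))
        return hits

    if __name__ == "__main__":
        # Point this at an export of your own photos to see what OCR surfaces.
        for name, snippet in scan_photos("./photo_library"):
            print(f"{name}: {snippet}")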

DeepSeek

DeepSeek, a Chinese competitor to OpenAI’s ChatGPT, received massive public attention and soared to the top of the App Store download charts when it launched recently. Here are some of the security-related events that subsequently occurred.

  • Harmonic Security took a look at the data privacy concerns around the Chinese AI company, highlighting vague statements about data retention within the People’s Republic of China. The AI security firm concluded, though, that very few (0.21%) of its customers’ users were actually using DeepSeek. harmonic.security  
  • DeepSeek limited signups amid a sudden wave of interest and in response to what it described as “large-scale malicious attacks on DeepSeek’s services”. theregister.com 
  • Lots of examples have been shared on social media of DeepSeek refusing to answer questions about topics the Chinese Communist Party deems sensitive, such as the Tiananmen Square Massacre. An analysis by PromptFoo of 1,156 prompts found that these “canned refusals” were given 85% of the time and were reasonably easy to circumvent, suggesting, the researchers say, that the censorship is a “crude, blunt-force” implementation rather than something deeply baked into the reasoning model itself. arstechnica.com 
  • The Chinese company appears to have pretty sloppy security engineering practices: Wiz security researchers found a publicly accessible database containing “a significant volume of chat history, backend data and sensitive information, including log streams, API Secrets, and operational details,” within ‘minutes’ of scanning DeepSeek’s infrastructure. The HTTP interface to the database allowed Wiz to run a SHOW TABLES; query, returning all the accessible tables (a rough illustration of this kind of exposure follows after this list). The log stream data may have included plaintext passwords and chat history. DeepSeek promptly fixed the issue after being notified. theregister.com 
  • Italy blocked DeepSeek over privacy concerns after the company told the Italian data protection regulator that it did not fall under the purview of GDPR. therecord.media
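For context on the Wiz finding, this is roughly what an HTTP interface that lets anyone run SHOW TABLES; means in practice. The sketch below assumes a ClickHouse-style HTTP endpoint that accepts SQL via a query parameter; the host and port are placeholders, and this illustrates the class of exposure described, not Wiz’s actual tooling or DeepSeek’s real infrastructure.

    # Rough illustration of an exposed database HTTP interface that accepts SQL
    # from anyone, unauthenticated. Assumes a ClickHouse-style endpoint taking
    # SQL via a `query` parameter; the host below is a placeholder.
    import requests

    TARGET = "http://db.example.internal:8123"  # hypothetical exposed endpoint

    def run_query(sql: str) -> str:
        """Send a SQL statement to the HTTP interface and return the raw response body."""
        resp = requests.get(TARGET, params={"query": sql}, timeout=5)
        resp.raise_for_status()
        return resp.text

    if __name__ == "__main__":
        # If this returns a table listing with no credentials supplied,
        # the database is effectively public.
        print(run_query("SHOW TABLES;"))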