Storj Collaborates with CUDOS to Enhance Cloud Storage and Compute Solutions
Tuesday, August 20, 2024 4:43 PM
Storj partners with CUDOS to advance distributed compute and cloud storage solutions, leveraging NVIDIA’s chips for the expanding AI market. The collaboration aims to provide scalable and cost-effective solutions for businesses and developers. Storj’s recent partnerships with cunoFS and AIOZ Network further enhance AI and video workflows in the decentralized cloud storage space. Competitors include Filecoin, Arweave, and AIOZ Network’s W3S. The DePIN market has seen increased funding for projects like IoTeX.
Related News
3 days ago
W2140 EXPO Highlights Titan Network and Pnuts.AI Innovations
On November 12, 2024, the W2140 EXPO, a premier global AI and Web3 conference, was inaugurated in Bangkok. Co-hosted by the Asian Business Association of Thailand and the Thai government, the event attracted participation from over 1,000 organizations and more than 200,000 attendees, marking it as the largest conference of its kind. During the event, members of the Titan Network core team engaged in meaningful discussions with UN staff and Dr. James Ong, a prominent scholar and founder of the Artificial Intelligence International Institute (AIII). Dr. Ong's keynote speech, titled "AI and Web for Humanity from the Global Majority," emphasized the importance of decentralized technologies in the modern landscape.
Dr. Ong highlighted Titan Network and its ecosystem partner, Pnuts.AI, as exemplary models within the AIDePIN and AIDeHIN frameworks. He praised Titan for developing a decentralized physical infrastructure network (DePIN) that leverages blockchain to utilize idle resources. This innovation offers a decentralized, secure, and transparent alternative to traditional cloud services, potentially saving up to 96% in costs. Additionally, he commended Pnuts.AI for being the most powerful real-time translation tool available, designed to break down language barriers using AI and Web3 technologies, providing rapid and accurate speech-to-speech translations in over 200 languages.
Furthermore, Dr. Ong discussed the future potential of Pnuts.AI as a standout Web3 project, envisioning a seamless integration of AI, Web3, and DeHIN. In this approach, top human language experts will collaborate with AI systems to enhance translation accuracy significantly. These experts will also provide extensive digital training materials to improve translation models, while Web3 mechanisms will incentivize cooperative human-AI efforts, fostering a robust AI-Web3 application ecosystem. This integration promises to revolutionize the way we approach language translation and communication in a globalized world.
5 days ago
Revolutionizing AI Efficiency: The Impact of the L-Mul Algorithm
The rapid development of artificial intelligence (AI) has led to significant advancements across various sectors, yet it comes with a hefty environmental price tag due to its high energy consumption. AI models, particularly those utilizing neural networks, require substantial computational power, which translates to enormous electricity usage. For example, running ChatGPT in early 2023 consumed approximately 564 MWh of electricity daily, equivalent to the energy needs of around 18,000 U.S. households. This energy demand is primarily driven by complex floating-point operations essential for neural network computations, making the search for energy-efficient solutions critical as AI systems grow in complexity.
Enter the L-Mul (Linear-Complexity Multiplication) algorithm, a groundbreaking development that promises to significantly reduce the energy burden associated with AI computations. L-Mul operates by approximating floating-point multiplications with simpler integer additions, which can be integrated into existing AI models without the need for fine-tuning. This innovative approach has demonstrated remarkable energy savings, achieving up to 95% reduction in energy consumption for element-wise tensor multiplications and 80% for dot product computations. Importantly, this energy efficiency does not compromise the accuracy of AI models, marking a significant advancement in the quest for sustainable AI.
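As a rough illustration of the core idea, and not the paper's exact algorithm, the mantissa product in a floating-point multiply can be replaced with an addition by dropping the small cross term. A minimal Python sketch using `math.frexp` to split a float into mantissa and exponent (the simplification and variable names are ours; the published L-Mul adds a correction offset this sketch omits):

```python
import math

def lmul_approx(x: float, y: float) -> float:
    """Illustrative multiplication-by-addition: the mantissa product is
    approximated with an addition by dropping the cross term (simplified;
    the published L-Mul includes a small correction offset)."""
    if x == 0.0 or y == 0.0:
        return 0.0
    sign = math.copysign(1.0, x) * math.copysign(1.0, y)
    mx, ex = math.frexp(abs(x))  # abs(x) == mx * 2**ex, with mx in [0.5, 1)
    my, ey = math.frexp(abs(y))
    # Write mx = 0.5*(1+fx) and my = 0.5*(1+fy); then
    # mx*my = 0.25*(1+fx)*(1+fy) ≈ 0.25*(1 + fx + fy), dropping fx*fy.
    fx, fy = 2.0 * mx - 1.0, 2.0 * my - 1.0
    return sign * math.ldexp(0.25 * (1.0 + fx + fy), ex + ey)

print(lmul_approx(2.0, 3.0))  # exact here, since one mantissa term is zero
print(lmul_approx(1.5, 1.5))  # ~2.0 vs. a true product of 2.25
```

The only operations on the mantissas are additions and exponent shifts, which is what makes hardware implementations of this style of approximation so much cheaper than a full floating-point multiply.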
The implications of L-Mul extend beyond mere energy savings; it enhances the performance of AI models across various applications, including transformer models and large language models (LLMs). In benchmarks such as GSM8k and visual question answering tasks, L-Mul has outperformed traditional floating-point formats like FP8, showcasing its potential to handle complex computations efficiently. As the demand for AI continues to rise, L-Mul stands out as a pivotal solution that not only addresses the energy crisis associated with AI but also paves the way for a more sustainable future in technology development.
5 days ago
Integrating OpenAI with Solana Using Lit Protocol
In a groundbreaking integration, Lit Protocol has demonstrated how to securely combine the capabilities of OpenAI and the Solana blockchain. By utilizing Wrapped Keys on Solana, developers can sign responses generated by the OpenAI API within a Lit Action. This integration opens up a myriad of innovative applications, particularly in the realm of AI-powered autonomous agents. These agents can operate on the blockchain without exposing sensitive API keys, thanks to Lit's threshold-based Programmable Key Pairs (PKPs) and Trusted Execution Environments (TEEs). This ensures that all sensitive operations remain protected, allowing AI agents to interact with both blockchain and traditional web services while maintaining decentralized identities.
The integration also emphasizes the importance of private compute and data processing. By encrypting data and executing large language model (LLM) prompts within Lit’s TEE, developers can ensure that sensitive information, such as medical records or financial data, remains secure throughout the process. The TEE provides hardware-level isolation, meaning even node operators cannot access decrypted data. This end-to-end encryption allows for the secure processing of private information, ensuring that all computations occur within a secure environment before results are re-encrypted and sent back.
Furthermore, the integration facilitates the generation of cryptographic proofs for training and inference. By restricting PKP signing permissions to specific IPFS CID hashes, developers can guarantee the authenticity of LLM-generated content. This proof system is particularly beneficial for audit trails and compliance requirements, as it enables third parties to verify the authenticity of the content produced by the LLM. Overall, this integration showcases the potential of combining AI with blockchain technology, paving the way for more secure and efficient applications in the future.
5 days ago
Stratos Partners with DeepSouth AI to Enhance Web3 Applications
Stratos has announced an exciting partnership with DeepSouth AI, a prominent player in the field of artificial intelligence that utilizes neuromorphic computing technology. This collaboration aims to merge DeepSouth AI's cutting-edge AI capabilities with Stratos's decentralized infrastructure solutions. The goal is to create more intelligent and accessible decentralized applications within the Web3 ecosystem, enhancing the overall functionality and user experience of these applications.
DeepSouth AI is in the process of developing a versatile platform that is equipped with a comprehensive suite of powerful AI tools. These tools are specifically designed to assist developers and enterprises in implementing advanced AI solutions. By integrating with Stratos's robust and scalable infrastructure, DeepSouth AI will benefit from a decentralized storage solution that offers reliability, security, and performance, essential for supporting high-demand AI-driven applications.
Through this strategic collaboration, Stratos is set to provide the necessary decentralized infrastructure to meet the high-volume data needs of DeepSouth AI's platform. This partnership is poised to usher in a new era of Web3 applications, where artificial intelligence and decentralized technology can work in harmony, ultimately driving innovation and accessibility in the digital landscape.
6 days ago
io.net and NovaNet Partner to Enhance GPU Verification with zkGPU-ID
In a significant move to enhance security and reliability in decentralized computing networks, io.net, a decentralized physical infrastructure network (DePIN) specializing in GPU clusters, has formed a partnership with NovaNet, a leader in zero-knowledge proofs (ZKPs). This collaboration aims to develop a groundbreaking solution known as zero-knowledge GPU identification (zkGPU-ID), which will provide cryptographic assurances regarding the authenticity and performance of GPU resources. By leveraging NovaNet's advanced ZKP technology, io.net will be able to validate that the GPUs utilized within its decentralized platform not only meet but potentially exceed their advertised specifications, thereby enhancing user trust and resource reliability.
Tausif Ahmed, the VP of Business Development at io.net, emphasized the importance of this partnership, stating that optimizing coordination and verification across a vast network of distributed GPU suppliers is crucial for building a permissionless and enterprise-ready decentralized compute network. The integration of NovaNet's zkGPU-ID will allow io.net to continuously validate and test its GPU resources on a global scale, ensuring that customers can confidently rent GPUs that are reliable and meet their specified needs. This initiative represents a significant advancement in the decentralized compute infrastructure, aiming to alleviate concerns regarding resource authenticity and performance.
Moreover, the zkGPU-ID protocol utilizes NovaNet's zkVM (zero-knowledge virtual machine) technology, which plays a vital role in generating and verifying cryptographic proofs of GPU specifications at lower costs. Wyatt Benno, Technical Co-Founder of NovaNet, highlighted the necessity of ZKPs operating across various devices and contexts for privacy and local verifiability. The zkEngine from NovaNet rigorously tests and identifies GPUs within io.net's platform, creating a ZKP that ensures GPU integrity. This partnership sets a new standard for transparency, reliability, and security in decentralized GPU compute networks, marking a pivotal step forward in the industry.
7 days ago
Falcon Mamba 7B: A Breakthrough in Attention-Free AI Models
The rapid evolution of artificial intelligence (AI) is significantly influenced by the emergence of attention-free models, with Falcon Mamba 7B being a notable example. Developed by the Technology Innovation Institute (TII) in Abu Dhabi, this groundbreaking model departs from traditional Transformer-based architectures that rely heavily on attention mechanisms. Instead, Falcon Mamba 7B utilizes State-Space Models (SSMs), which provide faster and more memory-efficient inference, addressing the computational challenges associated with long-context tasks. By training on an extensive dataset of 5.5 trillion tokens, Falcon Mamba 7B positions itself as a competitive alternative to existing models like Google’s Gemma and Microsoft’s Phi.
Falcon Mamba 7B's architecture is designed to maintain a constant inference cost, regardless of input length, effectively solving the quadratic scaling problem that plagues Transformer models. This unique capability allows it to excel in applications requiring long-context processing, such as document summarization and customer service automation. While it has demonstrated superior performance in various natural language processing benchmarks, it still faces limitations in tasks that demand intricate contextual understanding. Nevertheless, its memory efficiency and speed make it a compelling choice for organizations looking to optimize their AI solutions.
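The constant-cost property comes from the recurrent form of a state-space model: between tokens the model keeps only a fixed-size state, whereas attention must compare every token with every other token. A toy linear SSM recurrence makes this concrete (a simplification with made-up matrices, not Falcon Mamba's actual selective SSM):

```python
import numpy as np

def ssm_scan(A: np.ndarray, B: np.ndarray, C: np.ndarray, inputs):
    """Toy linear SSM: h_t = A @ h_{t-1} + B * x_t, y_t = C @ h_t.
    Only the fixed-size state h survives between steps, so memory is
    O(state_size) no matter how long the input sequence grows."""
    h = np.zeros(A.shape[0])
    ys = []
    for x in inputs:
        h = A @ h + B * x   # constant work and memory per token
        ys.append(float(C @ h))
    return ys

# Hypothetical 2-dimensional state; per-token cost is identical at any length.
A = np.array([[0.9, 0.0], [0.1, 0.8]])
B = np.array([1.0, 0.5])
C = np.array([1.0, 1.0])
print(ssm_scan(A, B, C, [1.0, 0.0, 0.0]))
```

An attention layer over the same sequence would instead materialize a score for every token pair, which is the quadratic term the article refers to.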
The implications of Falcon Mamba 7B extend beyond mere performance metrics. Its support for quantization enables efficient deployment on both GPUs and CPUs, further enhancing its versatility. As the AI landscape evolves, the success of Falcon Mamba 7B suggests that attention-free models may soon become the standard for many applications. With ongoing research and development, these models could potentially surpass traditional architectures in both speed and accuracy, paving the way for innovative applications across various industries.