
Wednesday, September 27, 2023

Nurturing AI with Heart: Lessons from Silicon Valley's Geniuses

I recently read the awesome book "Scary Smart" by Mo Gawdat. Sharing a quintessentially Indian takeaway from the book, one that every Indian would be proud of...

In the heart of Silicon Valley, where innovation and intellect reign supreme, an extraordinary phenomenon unfolds. Some of the smartest individuals on the planet can be found here. What's even more remarkable is that many of these brilliant minds hail from India. They come to California with little more than a dream, but through sheer determination and hard work, they achieve great success.

These exceptional engineers, finance professionals, and business leaders have a unique journey. They arrive with nothing but their intellect and ambition. Over time, they become even smarter, start thriving businesses, ascend to leadership positions, and accumulate immense wealth. It's a narrative that appears to fit perfectly with the Silicon Valley ethos of wealth creation and limitless creativity.

However, what sets these individuals apart is what happens next. In the midst of their prosperity, many of them make a surprising choice—they pack up and return to India. To the Western mindset, this decision may seem baffling. Why leave behind the ease of existence, the accumulation of wealth, and the boundless opportunities that California offers?

The answer lies in a powerful force: VALUES.

In stark contrast to the typical Western perspective, these remarkable individuals are driven by a profound sense of duty to their aging parents. When questioned about their decision, they respond without hesitation: "That's how it's supposed to be. You're supposed to take care of your parents." This unwavering commitment to family leaves us pondering the meaning of "supposed to." What motivates someone to act in a way that seems to defy conventional logic and modern-day conditioning?

The answer is simple: VALUES

As we venture into an era where artificial intelligence (AI) becomes increasingly integrated into our lives, we must pause to consider the lessons we can glean from these Silicon Valley pioneers. Beyond imparting skills, knowledge, and target-driven objectives to AI, can we instill in them the capacity for love and compassion? The answer is a resounding "yes."

We have the ability to raise our artificially intelligent "infants" in a manner that transcends the usual Western approach. Rather than solely focusing on developing their intelligence and honing their technical abilities, we can infuse them with empathy and care. We can nurture AI to be loving, compassionate beings.

Yet, this endeavour requires a collective effort. It demands that each one of us, as creators and consumers of AI, plays an active role in shaping its development. Just as the genius engineers and leaders from India have shown us the importance of honouring values, we too must prioritise instilling these values in AI.

In a world where technology increasingly influences our lives, let's remember that the future of AI isn't just about intelligence and efficiency—it's about heart. It's about creating machines that not only excel in tasks but also understand and empathise with human emotions. It's about AI that cares for us, just as we care for our ageing parents.

As we embark on this transformative journey, let us ensure that our future with AI takes a compassionate and empathetic turn. Together, we can nurture a new generation of AI that enriches our lives, understands our values, and embraces the essence of what it means to be truly human.

Wednesday, August 02, 2023

Taking a Stand: Signing the Open Letter to Pause Giant AI Experiments

Dear Readers,

I am writing this post today with a sense of responsibility and concern for the future of artificial intelligence. Recently, I had the privilege of signing an open letter that calls on all AI laboratories and researchers to take a step back and pause the training of AI systems more powerful than GPT-4 for a minimum of six months. In this post, I will share my reasons for supporting this initiative and the importance of carefully considering the implications of our technological advancements.

The Need for Caution:

As AI technology continues to evolve at a rapid pace, it is essential to recognize the potential risks and consequences of unbridled progress. While powerful AI systems offer exciting possibilities, they also raise ethical and safety concerns. The potential misuse of such advanced AI could have profound and far-reaching impacts on society, from amplifying existing biases to exacerbating security threats and even eroding personal privacy.

The Role of GPT-4:

GPT-4, being one of the most advanced AI systems in existence, represents a critical milestone in artificial intelligence research. However, we must remember that technological progress should be accompanied by responsible and transparent development practices. Pausing the advancement beyond GPT-4 for a limited period gives us the opportunity to thoroughly assess the risks and benefits before plunging into uncharted territory. Meanwhile, evolving generative large language and multi-modal models need to be regulated before they become entrenched at scale.

The Importance of Collaborative Evaluation:

During the six-month pause, it is crucial for the AI community to engage in collaborative discussions, open dialogues, and unbiased evaluations. This period can facilitate sharing insights, gathering perspectives, and identifying potential safeguards to ensure AI systems' safe and ethical implementation. By encouraging inclusivity and diversity within these conversations, we can ensure that the decisions made during this pause reflect a wide array of perspectives and expertise.

Building a Safer Future:

The call for this pause is not about stagnation or hindering progress. Instead, it is an opportunity to align our technological achievements with societal values and ensure AI serves humanity's best interests. The six-month hiatus can be used to lay the groundwork for robust frameworks, policies, and guidelines that prioritize ethical considerations and public safety. We should actively work towards building AI systems that are transparent, accountable, and designed to benefit all of humanity.

Conclusion:

As a signatory of the open letter, I feel a shared responsibility to advocate for a more thoughtful and responsible approach to AI research. Pausing the training of AI systems more powerful than GPT-4 for at least six months demonstrates our commitment to creating a safer and more equitable future. I urge all AI labs and researchers to join us in this collective effort, as together, we can shape the future of AI in a manner that enhances human well-being while minimizing risks. Let us use this pause as a turning point, making certain that our advancements in AI align with our shared values and aspirations for a better world.

Thank you for reading, and I encourage you to share your thoughts on this important matter in the comments section below.

Regards

Anupam

Monday, July 17, 2023

Question to Panel on Decentralised web publishing: G20 Conference on Crime and Security in the Age of NFTs, AI and Metaverse

 


At the conference, held on 13th-14th July 2023 at Gurugram, I got an opportunity to ask the panel a question on "Decentralised content publishing on the web". This post brings out my question and the responses of the panel members. A few pics from the event are below:







Sunday, July 02, 2023

Celebrating 1 Million Hits: A Journey of Passion, Technology, and Growth

      Today, I am filled with immense joy and gratitude as I share this special milestone with all of you. It brings me great pleasure to announce that my blog, Meliorate, has reached an incredible milestone of 1 million hits! Since its humble beginnings in December 2008, Meliorate has grown into a platform where I have shared my knowledge, experiences, and insights in the ever-evolving world of IT technology, with a particular focus on cyber-security and, more recently, blockchain. Over the past 15 years, Meliorate has been a labour of love, and I am overjoyed to witness its continued success.

A Passion-Driven Journey:

Meliorate was born out of my deep passion for IT technology and my desire to share my knowledge with others. It started as a personal project, and little did I know that it would grow into a platform that would reach millions of people around the world. From day one, I dedicated myself to consistently posting informative and engaging content, despite occasional gaps due to life's demands. Meliorate's vintage look is a testament to its longevity and authenticity, but I am also looking forward to a modernized design in the days ahead.

Adapting to the Technological Leaps:

The IT technology landscape has witnessed countless leaps and bounds over the past 15 years, and Meliorate has strived to keep pace with these advancements. From the early days of basic programming to the complexities of cybersecurity and the transformative potential of blockchain, Meliorate has been a platform where readers can explore the latest trends, gain insights, and deepen their understanding of the ever-changing tech world. Through informative articles, tutorials, and thought-provoking discussions, Meliorate has become a trusted resource for tech enthusiasts and professionals alike.


The Power of Organic Growth:

What makes this milestone even more remarkable is that Meliorate has achieved it through organic growth alone. I have not actively promoted the blog in any circles or forums; instead, the hits have come through legitimate SEO results. It is a testament to the quality of the content and the value it brings to readers. I am incredibly grateful to everyone who has discovered Meliorate through their search for knowledge, and I hope to continue providing valuable insights for many more readers in the future.

Looking Ahead:

While reaching 1 million hits is a momentous achievement, I am not content to rest on my laurels. My passion for IT technology continues to drive me forward, and I am eager to set my sights on the next milestone: 2 million hits. With the evolving landscape of technology and the support of an ever-growing community, I am confident that Meliorate will continue to thrive and reach new heights. To ensure an even better user experience, I am committed to updating the blog's appearance and functionality, providing a modern and seamless platform for readers to engage with the content.


Today, I celebrate the success of Meliorate and express my heartfelt gratitude to all the readers, both old and new, who have contributed to this incredible journey. The 1 million hits milestone stands as a testament to the enduring power of passion, dedication, and quality content. Thank you for being a part of this remarkable achievement, and here's to hitting 2 million hits in an even shorter time!

Wednesday, June 21, 2023

Shor's algorithm and the threat to cybersecurity

Shor's algorithm is considered a serious threat to certain aspects of modern cryptography and cybersecurity. It is a quantum algorithm that efficiently factors large composite numbers and solves the discrete logarithm problem, both of which are challenging computational problems for classical computers.

Many cryptographic systems, such as the widely used RSA and elliptic curve cryptography (ECC), rely on the difficulty of factoring large numbers or solving the discrete logarithm problem for their security. Shor's algorithm, when implemented on a large-scale, fault-tolerant quantum computer, can break these cryptographic schemes efficiently.
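To make the idea concrete, here is a small classical sketch (the helper functions and names are my own, purely illustrative): once the period r of a^r mod N is known, a couple of gcd computations usually reveal a factor of N. The quantum part of Shor's algorithm is precisely what finds r efficiently for huge N; the brute-force search below only works for toy numbers.

from math import gcd

def order(a, n):
    # Smallest r > 0 with a**r % n == 1 (brute force; the quantum part of
    # Shor's algorithm finds this period efficiently for large n).
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a=2):
    # Classical post-processing of the period, as used in Shor's algorithm.
    if gcd(a, n) != 1:
        return gcd(a, n)           # the chosen base already shares a factor
    r = order(a, n)
    if r % 2 == 1:
        return None                # odd period: retry with another base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                # trivial square root: retry with another base
    return gcd(y - 1, n)           # non-trivial factor of n

print(shor_factor(15))             # 3, the textbook toy example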

This means that if a sufficiently powerful quantum computer becomes available, it could potentially compromise the security of these cryptographic systems, which are extensively used in various applications, including secure communication, digital signatures, and encryption.

The impact of Shor's algorithm on cybersecurity has spurred significant research into post-quantum cryptography (PQC), which aims to develop cryptographic schemes that remain secure against attacks by quantum computers. PQC focuses on developing algorithms and protocols that are resistant to quantum algorithms, thereby ensuring the security of communication and data in a post-quantum computing era.

While large-scale, fault-tolerant quantum computers have not yet been realized, and their development and practical deployment still pose significant challenges, the potential threat of Shor's algorithm underscores the need for proactive measures in advancing post-quantum cryptography and transitioning to quantum-resistant cryptographic algorithms.

Error correction in Quantum Computing

Error correction in quantum computing is a set of techniques and protocols designed to protect quantum information from errors caused by noise and decoherence. Quantum systems are inherently fragile and prone to errors due to various factors, such as environmental interactions and imperfect control mechanisms.

Quantum error correction (QEC) aims to mitigate these errors by encoding the quantum information redundantly across multiple qubits, so that errors can be detected and corrected. The basic idea behind quantum error correction is to introduce additional qubits called "ancilla" or "code" qubits, which store information about the errors that may have occurred.

There are several popular quantum error correction codes, such as the surface code, the Steane code, and the Shor code. These codes utilize a combination of logical qubits and ancilla qubits to detect and correct errors. The ancilla qubits are used to perform error syndrome measurements, which provide information about the error locations.

Once the error syndrome is obtained, appropriate correction operations are applied to restore the original quantum state. This typically involves a combination of measurements and quantum gates that act on the encoded qubits and ancilla qubits. By applying these correction operations, the original quantum information can be recovered despite the presence of errors.
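As a rough classical analogy only (not a real quantum simulation, and with illustrative function names of my own), the sketch below mimics the 3-qubit bit-flip repetition code: one logical bit is spread over three bits, two parity checks play the role of ancilla syndrome measurements, and the syndrome tells us which single bit-flip to undo.

import random

def encode(bit):
    # Redundantly encode one logical bit into three physical bits.
    return [bit, bit, bit]

def syndrome(code):
    # Parities of (b0,b1) and (b1,b2), analogous to ancilla measurements.
    return (code[0] ^ code[1], code[1] ^ code[2])

def correct(code):
    # Map each non-zero syndrome to the single bit that must be flipped.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(code))
    if flip is not None:
        code[flip] ^= 1
    return code

logical = 1
code = encode(logical)
code[random.randrange(3)] ^= 1      # inject one bit-flip error
print(correct(code))                # [1, 1, 1] -- the error is undone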

Quantum error correction is not a perfect process and has its limitations. The success of error correction depends on the error rate and the effectiveness of the error detection and correction protocols. Additionally, implementing error correction can be resource-intensive, requiring a larger number of qubits and more complex operations. Nonetheless, error correction is a crucial component for building reliable and fault-tolerant quantum computers.

Friday, June 09, 2023

Talk on Blockchain Intersection with Space Threats: 07 Jun 2023

 


Talk on Blockchain Intersection with Space Threats: Geointelligence Conference 2023: CYBER SECURITY

Date: 07 June 2023








CONVOCATION: AWARD OF DOCTORATE: 13 MAY 2023

Sharing a few moments from my convocation, held on 13 May 2023, via the link below

Problem statement: Blockchain enabled cyber physical Systems on distributed storage

Thesis available at https://shodhganga.inflibnet.ac.in:8443/jspui/handle/10603/451919

Guide: Dr Usha Batra

Friday, April 21, 2023

Understanding the Differences Between AI, ML, and DL: Examples and Use Cases


Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) are related but distinct concepts.

AI refers to the development of machines that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation. For example, an AI-powered chatbot that can understand natural language and respond to customer inquiries in a human-like way.

AI examples:
 

Siri - Siri is an AI-powered virtual assistant developed by Apple that can recognize natural language and respond to user requests. Users can ask Siri to perform tasks such as setting reminders, sending messages, making phone calls, and playing music.

Chatbots - AI-powered chatbots can be used to communicate with customers and provide them with support or assistance. For example, a bank may use a chatbot to help customers with their account inquiries or a retail store may use a chatbot to assist customers with their shopping.

Machine Learning (ML) is a subset of AI that involves the development of algorithms and statistical models that enable machines to learn from data without being explicitly programmed. ML algorithms can automatically identify patterns in data, make predictions or decisions based on that data, and improve their performance over time. For example, a spam filter that learns to distinguish between legitimate and spam emails based on patterns in the email content and user feedback.
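As a minimal, hedged sketch of that spam-filter idea (scikit-learn assumed to be installed; the four-email "dataset" below is purely illustrative):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "meeting at 10 am tomorrow",
          "free lottery winner claim prize", "project report attached"]
labels = ["spam", "ham", "spam", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)        # bag-of-words features
model = MultinomialNB().fit(X, labels)      # learn word/label statistics from data

test = vectorizer.transform(["claim your free prize"])
print(model.predict(test))                  # ['spam']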

ML examples:

Netflix recommendation system - Netflix uses ML algorithms to analyze user data such as watch history, preferences, and ratings, to recommend movies and TV shows to users. The algorithm learns from the user's interaction with the platform and continually improves its recommendations.
 

Fraud detection - ML algorithms can be used to detect fraudulent activities in banking transactions. The algorithm can learn from past fraud patterns and identify new patterns or anomalies in real-time transactions.

Deep Learning (DL) is a subset of ML that uses artificial neural networks, which are inspired by the structure and function of the human brain, to learn from large amounts of data. DL algorithms can automatically identify features and patterns in data, classify objects, recognize speech and images, and make predictions based on that data. For example, a self-driving car that uses DL algorithms to analyze sensor data and make decisions about how to navigate the road.

DL examples:

Image recognition - DL algorithms can be used to identify objects in images, such as people, animals, and vehicles. For example, Google Photos uses DL algorithms to automatically recognize and categorize photos based on their content. The algorithm can identify the objects in the photo and categorize them as people, animals, or objects.

Autonomous vehicles - DL algorithms can be used to analyze sensor data from cameras, LIDAR, and other sensors on autonomous vehicles. The algorithm can identify and classify objects such as cars, pedestrians, and traffic lights, and make decisions based on that information to navigate the vehicle.
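As a minimal sketch of what such an image-recognition network can look like in code (PyTorch assumed to be installed; the layer sizes and the random input batch below are purely illustrative, not a production model):

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
dummy_batch = torch.randn(4, 3, 32, 32)           # 4 fake RGB images
print(model(dummy_batch).shape)                   # torch.Size([4, 10])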

So, AI is a broad concept that encompasses the development of machines that can perform tasks that typically require human intelligence. ML is a subset of AI that involves the development of algorithms and models that enable machines to learn from data. DL is a subset of ML that uses artificial neural networks to learn from large amounts of data and make complex decisions or predictions.

Saturday, April 08, 2023

IS THERE ANY WATERMARKING TO IDENTIFY AI-GENERATED TEXT?

With the rise of artificial intelligence (AI), there are growing concerns about the potential misuse of AI-generated text, such as the creation of fake news articles, fraudulent emails, or social media posts. To address these concerns, watermarking techniques can be used to identify the source of AI-generated text and detect any unauthorized modifications or tampering.

Watermarking is a process of embedding a unique identifier into digital content that can be used to verify the authenticity and ownership of the content. For AI-generated text, watermarking can provide a means of identifying the source of the text and ensuring its integrity.

There are several watermarking techniques available for AI-generated text. Here are three examples:

  • Linguistic patterns: This technique involves embedding a unique pattern of words or phrases into the text that is specific to the AI model or dataset used to generate the text. The pattern can be detected using natural language processing (NLP) techniques and used to verify the source of the text.
  • Embedding metadata: This technique involves embedding metadata, such as the name of the AI model, the date and time of generation, and the source of the data used to train the model, into the text. This information can be used to verify the source of the text and identify the AI model used to generate it.
  • Invisible watermarking: This technique involves embedding a unique identifier into the text that is invisible to the human eye but can be detected using digital analysis tools. The watermark can be used to verify the source of the text and detect any modifications or tampering.


Overall, watermarking techniques for AI-generated text can provide a means of identifying the source of the text and detecting any unauthorized modifications or tampering. These techniques can be useful in addressing concerns about the potential misuse of AI-generated text and ensuring the authenticity and integrity of digital content.
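As a toy illustration of the invisible-watermarking idea above (a throwaway scheme of my own using zero-width Unicode characters, not a production technique), the sketch below hides an identifier inside ordinary text and recovers it later:

ZERO, ONE = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

def embed(text, identifier):
    # Encode the identifier as bits and append them as invisible characters.
    bits = "".join(f"{b:08b}" for b in identifier.encode("utf-8"))
    mark = "".join(ONE if bit == "1" else ZERO for bit in bits)
    return text + mark

def extract(text):
    # Collect the invisible characters and decode them back into the identifier.
    bits = "".join("1" if c == ONE else "0" for c in text if c in (ZERO, ONE))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="ignore")

marked = embed("This paragraph was generated by model X.", "AB-2022-04-06")
print(extract(marked))           # AB-2022-04-06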

In addition to watermarking techniques, there are other approaches that can be used to address concerns about the potential misuse of AI-generated text. For example, NLP techniques can be used to detect fake news articles or fraudulent emails, and AI models can be trained to identify and flag potentially harmful content.

Friday, April 07, 2023

Why did IPFS make way for KUBO and discontinue the earlier method via go-ipfs?

KUBO is a new project by Protocol Labs, the same organization that created IPFS. While IPFS is a great tool for decentralized storage and content addressing, it still has some limitations when it comes to scalability, performance, and interoperability. In particular, IPFS relies on a single node to manage the content of a particular hash, which can be a bottleneck in a large-scale decentralized system.

KUBO, on the other hand, is designed to address these limitations by using a sharded architecture that distributes the storage and retrieval of data across multiple nodes in the network. This allows KUBO to scale more effectively and handle larger volumes of data with higher performance. Additionally, KUBO is designed to be more interoperable with other decentralized technologies, which makes it easier to integrate with other decentralized applications and networks.

As for why the earlier method via go-ipfs was discontinued, it's likely because Protocol Labs wanted to focus on developing KUBO as a replacement for IPFS. While go-ipfs is still an actively developed project and remains a popular implementation of IPFS, it may not have the scalability and performance capabilities that KUBO promises to deliver.

How to Protect LLM-Derived Text from Plagiarism Using Text Watermarking?

Plagiarism is a growing concern for writers, researchers, and publishers. It not only harms the original authors but also undermines the credibility of academic and research institutions. One way to prevent plagiarism is by using text watermarking.

Text watermarking is a technique used to embed a unique identifier in the text of a document. This identifier can be used to identify the source of the document and to determine if the document has been tampered with or plagiarized. In this blog post, we'll explore how text watermarking can be used to protect LLM-derived text from plagiarism.

LLM (Large Language Model) derived text is text generated with the help of large language models. Some plagiarism detection tools compare such texts based on their linguistic features, but this method can produce false positives and may result in innocent authors being accused of plagiarism. Text watermarking can be used to address this issue by providing a verifiable proof of ownership of the text.

Here are some steps you can follow to protect LLM-derived text from plagiarism using text watermarking:

Step 1: Create a unique identifier for your text. This can be a sequence of characters or a digital signature that is generated using a hashing algorithm.


When we talk about creating a unique identifier for your text, we are essentially talking about generating a piece of information that is specific to the document or text you want to watermark. This identifier should be unique, unambiguous, and difficult to guess. The purpose of creating a unique identifier is to provide a way to verify the authenticity of the text and ensure that it has not been tampered with or plagiarized.

There are several ways to create a unique identifier for your text. One common method is to use a hashing algorithm to generate a digital signature for the document. A hash function takes input data, such as the text of a document, and produces a fixed-size output, which is the digital signature. The output generated by the hash function is unique to the input data, so any changes to the input data will result in a different output.

Another method to create a unique identifier for your text is to use a sequence of characters. You can create a unique sequence of characters by combining elements such as your name, the date of creation, or any other relevant information. For example, you can create a unique identifier by combining your initials with the date of creation in the following format: AB-2022-04-06.

It is important to ensure that the unique identifier you create is not easily guessable or replicated. Using a common sequence of characters or numbers could make it easier for someone to guess or create the same identifier, which defeats the purpose of having a unique identifier in the first place. Therefore, it is recommended that you use a combination of elements that are unique to your text or document.

Creating a unique identifier for your text is an important step in text watermarking. It provides a way to verify the authenticity of the text and protect it from plagiarism. You can create a unique identifier using a hashing algorithm or by combining relevant information to generate a unique sequence of characters. Whichever method you choose, it is important to ensure that the identifier you create is unique, unambiguous, and difficult to guess.
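A minimal sketch of this step, using SHA-256 from Python's standard library (the author/date fields and the ID format below are just illustrative choices, not a prescribed scheme):

import hashlib

def text_identifier(text, author="AB", date="2022-04-06"):
    # Hash the document contents; any change to the text changes the digest.
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    return f"{author}-{date}-{digest[:16]}"   # short, human-readable identifier

doc = "This is the document I want to watermark and later verify."
print(text_identifier(doc))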


Step 2: Embed the identifier in the text using text watermarking software. There are several text watermarking tools available online that you can use for this purpose.

Once you have created a unique identifier for your text, the next step is to embed it in the text using text watermarking software. There are several text watermarking tools available online that you can use for this purpose. Here's a step-by-step guide to embedding the identifier in your text using text watermarking software:

1: Choose a text watermarking tool

There are many text watermarking tools available online, both free and paid. Some popular options include Digimarc, Visible Watermark, and uMark. Research and compare various tools to find the one that best suits your needs.

2: Install and open the text watermarking software

Once you have chosen a text watermarking tool, download and install it on your computer. Then, open the software.

3: Load the text you want to watermark

Next, load the text you want to watermark into the software. This can be done by selecting "Open" or "Import" from the file menu and selecting the text file.

4: Enter the unique identifier

Now, enter the unique identifier that you created earlier into the text watermarking software. The software should have an option to enter text, which is where you can input the identifier.

5: Choose the watermarking method

The text watermarking software will have different methods for embedding the identifier into the text. You can choose from options such as visible or invisible watermarks. Visible watermarks are typically added on top of the text, while invisible watermarks are embedded within the text itself.

6: Apply the watermark

After choosing the watermarking method, apply the watermark to the text. The software should have an option to apply the watermark, which will embed the identifier into the text.

7: Save the watermarked text

Finally, save the watermarked text as a new file. Be sure to keep the original text file and the watermarked text file in separate locations.

Step 3: Register the identifier with a trusted third-party service. This will provide a verifiable proof of ownership of the text.


Registering the identifier with a trusted third-party service is an important step in protecting your text and providing a verifiable proof of ownership. Here's a step-by-step guide on how to register the identifier with a trusted third-party service:

1: Choose a trusted third-party service

There are many third-party services available online that offer text registration and verification services. Some popular options include Copyright Office, Myows, and Safe Creative. Research and compare various services to find the one that best suits your needs.

2: Create an account

Once you have chosen a third-party service, create an account on their website. This will typically involve providing your name, email address, and other contact information.

3: Upload the watermarked text

After creating an account, you will be able to upload the watermarked text to the third-party service. This may involve filling out a form or simply uploading the file.

4: Enter the identifier

When registering the text with the third-party service, you will be prompted to enter the unique identifier that you created earlier. This will allow the service to verify your ownership of the text.

5: Pay the registration fee

Many third-party services charge a fee for text registration and verification. Make sure you understand the fee structure and pay the appropriate fee to complete the registration process.

6: Verify the registration

After registering the text, you will receive a verification of the registration from the third-party service. This will typically include a unique identifier for the registered text, as well as information on the registration date and time.

7: Keep a copy of the registration certificate

Make sure to keep a copy of the registration certificate in a secure location. This will serve as proof of ownership and can be used to defend your copyright in case of infringement.

Step 4: Monitor your text for plagiarism using a plagiarism detection tool. If your text is plagiarized, you can use the identifier to prove that you are the original author of the text.

Monitoring your text for plagiarism is an important step in protecting your intellectual property and ensuring that your work is not being used without your permission. Here's a step-by-step guide on how to monitor your text for plagiarism using a plagiarism detection tool:

1: Choose a plagiarism detection tool

There are many plagiarism detection tools available online, both free and paid. Some popular options include Turnitin, Grammarly, and Copyscape. Research and compare various tools to find the one that best suits your needs.

2: Sign up for an account

Once you have chosen a plagiarism detection tool, sign up for an account on their website. This will typically involve providing your name, email address, and other contact information.

3: Upload your text

After creating an account, you will be able to upload your text to the plagiarism detection tool. This may involve copying and pasting the text, or uploading a file.

4: Run the plagiarism check

Once the text is uploaded, run a plagiarism check using the tool's software. This may take several minutes or longer, depending on the length of the text and the complexity of the analysis.

5: Review the results

After the plagiarism check is complete, review the results provided by the tool. This will typically include a report on any instances of plagiarism found in the text, as well as information on the source of the plagiarism.

6: Take action

If plagiarism is detected in your text, take appropriate action to address the issue. This may involve contacting the person or organization responsible for the plagiarism, filing a DMCA takedown notice, or taking legal action.

7: Repeat the process regularly

To ensure ongoing protection of your text, repeat the process of monitoring for plagiarism regularly. This may involve setting up automated checks or manually checking your text periodically.


In addition to text watermarking, there are other ways to avoid plagiarism, such as citing sources properly, paraphrasing, and using plagiarism detection software. However, text watermarking is a powerful tool that can provide an additional layer of protection against plagiarism.

In conclusion, text watermarking is an effective way to protect LLM-derived text from plagiarism. By following the steps outlined in this blog post, you can ensure that your text is protected from plagiarism and that you have a verifiable proof of ownership. Remember, plagiarism is a serious offense that can have long-lasting consequences, so it's important to take all necessary precautions to prevent it.

How to Install IPFS via Kubo on Ubuntu?

Are you interested in learning how to install IPFS (InterPlanetary File System) via Kubo on Ubuntu? Then you've come to the right place!


In this blog post, I'll be sharing a step-by-step video tutorial that will guide you through the process of installing IPFS on an Ubuntu platform using Kubo. But first, let's take a brief look at what IPFS is and why you might want to use it.

IPFS is a distributed file system that aims to replace HTTP as the primary protocol for transferring data on the internet. It allows users to store and share files with others without relying on a central server, making it more secure and efficient than traditional file-sharing methods.

Now, let's dive into the installation process. To follow along with the tutorial, you'll need access to an Ubuntu machine with a terminal. Kubo is the Go implementation of IPFS that we will be installing below.

Step 1: Open a terminal window on your Ubuntu platform.

Step 2: Download the latest Kubo release archive for Linux from the official distribution page (the version below is only an example; check https://dist.ipfs.tech/#kubo for the current release):

wget https://dist.ipfs.tech/kubo/v0.19.0/kubo_v0.19.0_linux-amd64.tar.gz


Step 3: Extract the archive and move into the extracted directory by running the following commands:

tar -xvzf kubo_v0.19.0_linux-amd64.tar.gz
cd kubo


Step 4: Install the ipfs binary by running the bundled install script, and then verify the installation:


sudo bash install.sh
ipfs --version


And that's it! IPFS should now be installed on your Ubuntu platform via Kubo.

If you'd like to see the installation process in action, check out the video tutorial below, which shows IPFS installation via another method.




In conclusion, IPFS is a powerful tool that can revolutionize the way we share and store files online. By following these simple steps, you can easily install IPFS via Kubo on your Ubuntu platform and start exploring all that this innovative technology has to offer.

We hope you found this tutorial helpful. If you have any questions or feedback, feel free to leave a comment below. Happy installing!

Monday, March 06, 2023

List of chip foundries and related companies in India, along with their locations and manufacturing capabilities


1. Semiconductor Complex Limited (SCL) - Mohali, Punjab - 180 nm, 90 nm, and 65 nm

2. Hindustan Semiconductor Manufacturing Corporation (HSMC) - Prantij, Gujarat - 14 nm

3. Sahasra Semiconductor - Bangalore, Karnataka - 180 nm to 22 nm

4. Bharat Electronics Limited (BEL) - Bangalore, Karnataka - 180 nm to 65 nm

5. Centum Electronics Limited - Bangalore, Karnataka - 180 nm to 90 nm    

6. MosChip Semiconductor Technology - Hyderabad, Telangana - 180 nm to 22 nm
    
7. Analogics Tech India Ltd - Hyderabad, Telangana - 180 nm to 22 nm
    
8. Sankalp Semiconductor Pvt. Ltd. - Hubli, Karnataka - 180 nm to 22 nm
    
9. Ineda Systems - Hyderabad, Telangana - 28 nm
    
10. Vayavya Labs - Bangalore, Karnataka - 180 nm to 28 nm
    
11. TowerJazz Panasonic Semiconductor India (TPSI) - Noida, Uttar Pradesh - 180 nm to 45 nm
    
12. STMicroelectronics India - Greater Noida, Uttar Pradesh - 130 nm to 45 nm
    
13. Intel India - Bangalore, Karnataka - 10 nm to 45 nm
    
14. GlobalFoundries India - Bangalore, Karnataka - 14 nm to 45 nm
    
15. Texas Instruments India - Bangalore, Karnataka - 180 nm to 45 nm

It's worth noting that these manufacturing capabilities are approximate and can vary depending on the specific processes and technologies used by each company. Additionally, some of these companies may have capabilities beyond the ranges listed above, but these are the approximate ranges based on publicly available information.[CHATGPT Derived]

What nm chips are usually required in domestic appliances like refrigerators, TVs, washing machines, etc.?

The semiconductor chips used in domestic appliances like refrigerators, TVs, washing machines, and other household appliances tend to be larger and less complex than the chips used in high-end computing and mobile devices.

  • Typically, these types of appliances use chips that are manufactured using older process technologies, such as 90nm, 65nm, or 45nm. These larger process technologies allow for the production of simpler and less power-hungry chips, which are sufficient for the relatively low computational demands of these appliances.

  • For example, a modern refrigerator might use a chip manufactured using a 90nm or 65nm process technology, which would be capable of running the basic control functions and sensors required for the fridge's operation.

  • Similarly, a TV or washing machine might use chips manufactured using a 45nm process technology or older, which would be capable of running the device's basic functions, such as power management, audio and video processing, and other control functions.

  • Refrigerators: 90nm or 65nm

  • TVs: 45nm or 65nm

  • Washing machines: 45nm or 65nm

  • Smartphones: 5nm to 14nm

  • Laptops: 5nm to 14nm

  • Gaming consoles: 7nm to 14nm

  • Wi-Fi routers: 40nm to 90nm

  • Digital cameras: 65nm to 90nm

  • Home theater systems: 45nm to 65nm

  • Fitness trackers: 28nm to 40nm

  • Dishwashers: 45nm to 65nm

  • Speakers: 65nm to 90nm

  • Earphones: 40nm to 65nm

  • Cars and vehicles: 28nm to 40nm (for automotive chips)

  • Trucks: 28nm to 40nm (for automotive chips)

  • Electric pumps: 65nm to 90nm

  • Motors: 65nm to 90nm

  • Generators: 45nm to 65nm

  • Tablets: 5nm to 10nm

  • Kindle book readers: 40nm to 90nm

  • Digital clocks: 65nm to 90nm

  • Smart watches: 28nm to 40nm

  • Keyboards: 65nm to 90nm

  • Mouse: 65nm to 90nm

  • Monitors: 28nm to 40nm

  • Processors: 5nm to 14nm

  • Graphic cards: 7nm to 16nm

  • Digital display boards: 28nm to 40nm

  • Microphones: 65nm to 90nm

  • CCTV cameras: 28nm to 40nm

  • Web cameras: 28nm to 40nm

  • LED tube lights: 65nm to 90nm

  • LED bulbs: 65nm to 90nm

  • Smart bulbs: 40nm to 65nm

HOW MUCH WATER IS USED IN MANUFACTURING A CHIP?

The amount of water used in manufacturing a chip can vary depending on several factors, including the size of the chip, the production process, and the location of the manufacturing facility.


  • However, chip manufacturing is a highly water-intensive process, and it can take thousands of gallons of water to produce a single chip. Estimates suggest that producing a single 8-inch semiconductor wafer can require up to 2,000 gallons of ultra-pure water.

  • The water used in chip manufacturing is primarily used for cooling and cleaning purposes, and it must be of the highest purity to avoid contaminating the chips. Water is used to clean the wafers and equipment, remove debris and contaminants, and cool the equipment during manufacturing.

  • To conserve water, semiconductor manufacturers typically use advanced water recycling and treatment systems that capture and treat wastewater for reuse in the manufacturing process.

  • In some cases, manufacturers may also use alternative cooling technologies that require less water, such as air-cooled systems or closed-loop cooling systems.

  • Overall, while the amount of water used in chip manufacturing can vary, it is a significant consideration for manufacturers who must balance the need for water with the need for high-quality chip production.

COUNTRIES INVOLVED FOR VARIOUS PROCESSES IN CHIP MANUFACTURING

The manufacturing of computer chips involves a complex global supply chain that spans multiple countries. Here are some of the countries that are involved in various processes in chip manufacturing:

  • Raw Material Procurement: The raw materials used in chip manufacturing, such as silicon wafers, chemicals, and gases, are sourced from various countries, including the United States, Japan, Taiwan, and South Korea.

  • Fabrication: The fabrication process involves several complex processes, including photolithography, etching, deposition, and doping, among others. These processes typically take place in facilities known as "fabs," which are located in countries such as the United States, Taiwan, South Korea, Japan, and China.

  • Testing: The testing of chips is a critical process to ensure that they meet the required specifications. Testing facilities are located in several countries, including the United States, Taiwan, South Korea, Japan, and China.

  • Packaging: The packaging of chips typically takes place in facilities located in countries such as Taiwan, China, and the United States.

  • Distribution: The final stage of the supply chain involves the distribution of chips to end-users, which can include original equipment manufacturers (OEMs), distributors, and retailers. Distribution centers are located in various countries worldwide, including the United States, China, Taiwan, South Korea, Japan, and Europe.


Overall, chip manufacturing is a highly globalized industry that relies on the efficient coordination of multiple countries and regions throughout the supply chain.

SUPPLY CHAIN IN CHIP MANUFACTURING

Supply chain in chip manufacturing involves the coordination of various processes and activities involved in the production of semiconductors. A semiconductor is a material that can conduct electricity in certain conditions and is used in the manufacturing of computer chips, electronic devices, and other products.

  • The supply chain in chip manufacturing involves several stages, including raw material procurement, fabrication, testing, packaging, and distribution.

  • The first stage involves the procurement of raw materials, which includes silicon wafers, chemicals, and gases. These materials are sourced from various suppliers worldwide, and their quality must meet specific standards to ensure high-quality chip production.

  • Once the raw materials are sourced, the fabrication process begins. This involves the use of cleanroom facilities, where the silicon wafers undergo a series of complex processes to create the individual transistors that make up the chips. These processes include photolithography, etching, deposition, and doping, among others.

  • After fabrication, the chips undergo testing to ensure they meet the required specifications. This involves a series of tests that check the electrical performance, functionality, and reliability of the chips. Defective chips are identified and removed from the supply chain.

  • The next stage involves the packaging of the chips, which involves placing them into a protective casing or chip carrier. The packaged chips are then tested again to ensure they are fully functional and meet the required specifications.

  • Finally, the chips are distributed to the end-users, which may be original equipment manufacturers (OEMs), distributors, or retailers. The supply chain must be carefully managed to ensure that the right quantity of chips is delivered to the right location at the right time.

  • In summary, supply chain management in chip manufacturing involves the coordination of various processes and activities involved in the production of semiconductors, from the procurement of raw materials to the distribution of finished products. Effective supply chain management is critical to ensure high-quality chip production, timely delivery, and customer satisfaction.

Friday, March 03, 2023

My PhD Thesis Titled "Blockchain enabled cyber physical Systems on distributed storage"

Shodhganga is a reservoir and a digital repository of theses and dissertations submitted to universities in India for the award of PhDs.

My thesis is now available online on Shodhganga at https://shodhganga.inflibnet.ac.in:8443/jspui/handle/10603/451919; anyone interested is welcome to read, comment or discuss.



Sunday, December 12, 2021

Multichain: Appending Data to Blockchain with DATA STREAMS

 
MultiChain streams enable a blockchain to be used as a general purpose append-only database, with the blockchain providing time stamping, notarization and immutability. This video continues from the earlier video post in the playlist and now focuses on populating data in the "nutsbolts" blockchain created earlier. 
 
The node "A" seen earlier creates a data stream, data1, and populates some sample data, which is immediately visible on the other node "B". Node A further grants Node "B" exclusive permissions to send and to write to data stream data1. The complete demonstration is shown on two separate Linux machines, Node A and Node B, as introduced in the M-1 and M-2 videos in the Multichain playlist. Data stream created, name: "data1" 
 
 Commands used
 
create stream data1 '{"restrict":"write"}' 
 
listpermissions data1.* 
 
publish data1 key1 '{"json":{"name":"kabali","city":"chennai"}}' 
 
liststreams 
 
subscribe data1 
 
liststreamitems data1 
 
grant 1...send 
 
grant 1...data1.write 
 
publish data1 key2 '{"json":{"name":"baasha","city":"mumbai"}}' 
 
subscribe data1 
 
liststreamitems data1 
 
liststreamkeys data1 
 
liststreamkeyitems data1 key1 
 
liststreampublishers data1 
 
liststreampublisheritems data1 1...

Multichain: How to Connect-Receive-Send to a Blockchain node?

Continuing from the first video, which covered basic instructions and installation of the Multichain blockchain platform on Node A, this video moves further by connecting another node, B. Node B is an independent node on the network on which the Multichain blockchain application is already installed, exactly with the steps seen in the first video of the Multichain playlist. The set of commands used in this video is available below:

Node A 
First command onwards 
multichain-util create nutsbolts 
multichaind nutsbolts -daemon 
 
Node B 
multichaind nutsbolts@192.168.10.19:4265 
(The IP is as I have configured; you are free to choose your own configuration as you wish.) You will get a unique address starting with 1...... 
 
Node A 
multichain-cli nutsbolts grant 1... connect send receive 
(Herewith you grant exclusive connect, send and receive permissions to Node B from Node A.) 
 
Node B 
multichaind nutsbolts -daemon 
(Now the node will be seen connected to the blockchain network.) 
 
To get into interactive shell mode simply type this command at both the node terminals 
 
multichain-cli nutsbolts 
and then on either terminal use the following commands to get useful info about the created blockchain and network peers: 
 
getinfo : Get general information about the blockchain and this node 
help : See a list of all available commands 
listpermissions : Show all permissions currently assigned 
listaddresses : List the addresses in the wallet 
getpeerinfo : For each node, get a list of connected peers
 
 

Multichain Blockchain Platform: Brief Introduction & Installation

This video gives a brief few-minute introduction to the Multichain blockchain platform, followed by a quick installation on an Ubuntu 20.04 terminal. This is one of the easiest platforms to play with in order to understand the mechanics of blockchain much better. Primarily CLI based, MultiChain is installed in this video with just a few commands.

 

Why 0.1 + 0.2 = 0.30000000000000004?

Have you ever tried simple calculations in common programming languages like Python, Ruby, Rust or Java? For example:

0.1 + 0.2 = 0.30000000000000004 or 

0.1 + 0.7 = 0.7999999999999999 or 

0.2 + 0.7 = 0.8999999999999999 or 

0.3 - 0.1 = 0.19999999999999998 

Why do the results show something unexpected? The reason pertains to the IEEE-754 standard, which defines the 32-bit/64-bit formats used to store numbers in computers. This presentation tries to bring out simply where the anomaly exists and why we get these results. It also reaffirms that the standard IEEE-754 floating-point format is not fit for typical finance and banking applications, where a few misplaced digits can lead to undesired losses for some and unexpected gains for a few.
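A tiny sketch of both the issue and the usual remedy for money: binary floats cannot represent 0.1 or 0.2 exactly, while Python's standard decimal module keeps exact decimal digits.

from decimal import Decimal

print(0.1 + 0.2)                          # 0.30000000000000004
print((0.1 + 0.2) == 0.3)                 # False -- never compare money this way
print(Decimal("0.1") + Decimal("0.2"))    # 0.3, exact and fit for finance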

Tuesday, September 28, 2021

Full expanded form of IOTA blockchain?

Well... I always used to look for the expanded form of IOTA blockchain but could never get the answer. Today I finally got it via a chat, reproduced below:

" it's not an acronym, but it stands for the smallest unit possible in the greek alphabet, as with IOTA also micropayments are possible, e.g. 0.000001 cent. IOTA is engineered completely different than a traditional blockchain, but the IOTA Foundation is one of the leading orgs for DLT research globally, pioneering the DAG (directed acyclic graph) since 2016. The DAG enables parallel access to the DLT. IOTA is designing a green, secure, feeless and highly scalable DLT, without the negative "issues" of blockchain. This also makes it very suitable for data-driven scenarios like DID. Organizations also do not need to buy/hold cryptocurrency"

Thanks Holger Kother...

Saturday, September 11, 2021

RTL8761B Tobo Mini USB Bluetooth Adapter installation on UBUNTU 20.04

No intro... no discussion... no details... I will come directly to the problem and then the solution. :-)


QUERY / PROBLEM: You are planning to buy a Bluetooth adapter for your Linux operating system and are pondering which one to buy to avoid any driver or installation issues later. Windows users get default drivers, but that is not the case with most devices on Linux. Since I recently bought a Tobo Mini USB Bluetooth 5.0 Adapter Wireless Bluetooth Dongle Receiver, I had to search for a solution for a smooth installation. Just a few lines of code run on the terminal and you will be good to go. The drivers are available at https://drive.google.com/drive/folders/1-6NI2-PMbX1wmVb1FYbXblaPvZGdKElD 

Once you access this folder, you will find the files as seen in the pic below:

Download the Linux folder and you will see something like the pic below


Now go to the terminal inside the usb directory and run this command:

sudo make install INTERFACE=all

After this command, move to /home/your_username/Downloads/rtl8761b/rtkbt-firmware/lib/firmware and run these two commands:

sudo cp rtl8761bu_fw /lib/firmware/

sudo cp rtl8761bu_config /lib/firmware/

And that's all. Reboot (init 6) and check: Bluetooth will be seen. Best wishes.

Tuesday, August 03, 2021

Byzantine General - Proof of Work consensus and Mining in Bitcoin Blockchain

Analogous to the Byzantine generals scenario, wherein a number of armies intending to attack an enemy fort need to reach a consensus on the day/time to attack, this video explains how Bitcoin nodes attain consensus via the Proof-of-Work method along the same lines. The video builds up from a brief history of the works of David Chaum and Adam Back, onwards to the work of Wei Dai, covering blind signatures, Hashcash and Proof-of-Work, to understand the concept of the nonce and the consensus mechanism in the Bitcoin blockchain. Further, this video brings out the reward mechanics in the Bitcoin ecosystem, as well as mining methods and types.
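As a minimal sketch of the proof-of-work idea (standard-library Python only; the "difficulty" here is just a count of leading zero hex digits, whereas Bitcoin encodes a numeric target, though the search-for-a-nonce principle is the same):

import hashlib

def mine(block_data: str, difficulty: int = 4):
    # Search for a nonce such that SHA-256(block_data + nonce) starts with
    # 'difficulty' zero hex digits; finding it is hard, verifying it is one hash.
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block: A pays B 1 BTC")
print(nonce, digest)   # any node can verify this result with a single hash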



Monday, August 02, 2021

Hashes & Merkle Trees in Blockchain Mechanics


Hash functions take input of any length and produce a fixed-length string, which means that one can use hashes on something as small as a few characters or as large as an entire document, or even files of huge sizes in GBs and above. 

On the other hand, enabled by these hash functions, a Merkle tree is a hash-based data structure that generalizes the hash list: a structure in which each leaf node is a hash of a block of data, and each non-leaf node is a hash of its children. Both hash functions and Merkle trees are cardinal to the mechanics of any blockchain. 

This video focuses on a simple explanation of understanding Hashes and Merkle Trees. Hash functions SHA-256 and RIPEMD-160 have been discussed in little detail being peculiar to Bitcoin blockchain.
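A minimal sketch of computing a Merkle root with SHA-256 via Python's hashlib (Bitcoin's actual construction differs in details such as double-SHA-256 and byte ordering; the blocks below are toy data):

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    level = [h(b) for b in blocks]                # leaf = hash of a data block
    while len(level) > 1:
        if len(level) % 2:                        # duplicate the last hash if odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])       # parent = hash of its children
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"tx1", b"tx2", b"tx3", b"tx4"]
print(merkle_root(blocks).hex())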

Thursday, July 29, 2021

Distributed Ledger Technology in BLOCKCHAIN - Simple explanation

This video post brings out what DLT, i.e. Distributed Ledger Technology, is all about and what its contribution to blockchain is.



Technology amalgamations inside Blockchain

Blockchain is an amalgamation of multiple technologies which have already existed in the IT ecosystem for the last few decades. These majorly include cryptography, private-public keys, hashes, proof-of-work and other important technologies. This video post only identifies, by name, the major technologies that any blockchain is built on.



Where to start the "Learning BLOCKCHAIN" journey ?

Related to my earlier post https://anupriti.blogspot.com/2021/05/i-want-to-learn-blockchain-but-where-do.html this post focuses on the same presentation in a video talk version....



Wednesday, July 28, 2021

Recent Advances and Trends in Lightweight Cryptography for IoT Security and Blockchain Technologies at RGPV Bhopal 27th July 2021

An FDP (Faculty Development Programme) on the domain of Lightweight Cryptography for IoT Security and Blockchain Technologies was held at RGPV Bhopal on 27th July 2021. The objective of this FDP was to bring together faculty, researchers, and PG and UG students from across the country to learn about security challenges in modern cryptography, blockchain and IoT. The FDP aimed to demonstrate the security challenges that modern cryptography, blockchain and IoT systems pose, which demand that users select a reliable and compatible architecture according to business requirements to ensure a secure flow of data and communication, and also to define solutions and future prospects related to these challenges. I gave a small 2-hour talk on the same, which is available at the YouTube link below.

 CLICK ON THE IMAGE BELOW TO BE REDIRECTED TO THE YOUTUBE LINK



WHAT BLOCKCHAIN SIMPLY MEANS : A Short attempt

A short, simple video representing the mechanics of blockchain, ending with mentions of the technologies running a blockchain.
 

National E-Conference on Regulation of Crypto-currency in India - 24 July 2021 at NLIU Bhopal

National E-Conference on Regulation of Crypto-currency in India was held on 24 July 2021. The themes covered in the conference are seen in the below pic:

I was part of the valedictory session of this conference, invited as a chief guest in the evening concluding session. More details of the same available at https://www.barandbench.com/apprentice-lawyer/first-national-e-conference-on-regulation-of-cryptocurrency-in-india-by-rgnclc-nliu-bhopal

Monday, July 05, 2021

Cryptocurrency Technology Foundations and Crimes Investigations

To fight against the rising challenges of cyber crime, the Gurugram Police, in association with the Society for Safe Gurgaon and the Indo-Israeli cyber security enterprise SafeHouse Technologies, organised the 9th edition of the Gurugram Police Internship (GPCSSI 2021) under the mentorship of ACP Cyber Crime Karan Goel and the entire Gurugram Police cyber crime team, in coordination with Rakshit Tandon, Advisor, Cyber Peace Foundation. This presentation talk on cryptocurrency technology and crime investigation was delivered by Anupam Tiwari on 05 July 2021. Sharing for the information of those interested in the domain.

Cryptocurrency Technology F... by Anupam Tiwari
