
Showing posts with label privacy. Show all posts

Saturday, December 21, 2024

The Trust Factor: Why Being Trusted Is More Valuable Than Being Liked Today

1.    In the age of social media, instant messaging, and online interactions, interpersonal relationships are no longer confined to face-to-face meetings. Whether it's connecting with old friends, forming new acquaintances, or navigating professional networks, most of us interact with others digitally in one way or another. And while being liked—charming, personable, or approachable—may seem like the key to building strong relationships, trust has become far more important in ensuring those relationships are safe, meaningful, and long-lasting.

Trust: The Foundation of Secure Relationships

2.    In the digital world, we’re not just concerned with connecting with others; we’re also navigating new risks that come with sharing personal information, emotions, and sometimes, vulnerabilities. Trust ensures that these interactions remain genuine, respectful, and protected. Whether you’re sharing a sensitive thought with a close friend over text or discussing business details in a virtual meeting, trust keeps these exchanges secure. Trust means that you believe the person on the other end won’t misuse the information you share, and that they have your best interests at heart.


3.    While likability is important in forming connections, it can be deceiving. A person may be extremely likable but also manipulative or deceitful. In the age of social media, people can project an image of themselves that may be far removed from reality, all in the pursuit of likes and validation. Trust, however, is built on consistency, transparency, and reliability. It takes time to build, but once it’s established, it’s a much stronger and more enduring foundation for any relationship.

The Dangers of Misplaced Trust in the Digital Age

4.    With so much of our lives online, the potential for exploitation grows. Cybercriminals often exploit likability and emotional appeals to manipulate people into giving up personal information, clicking on malicious links, or even transferring money. Social engineering attacks, like phishing, frequently prey on the human tendency to trust those who seem friendly or trustworthy. In these cases, likability becomes a weapon in the hands of cybercriminals.

5.    This is where trust becomes paramount. Trust isn’t just about feeling good about someone; it’s about knowing they have your security and privacy in mind. Whether it’s an online friendship or a business relationship, trusting that someone won’t betray your confidence is what keeps your interactions safe. People who are trusted respect boundaries, follow through on promises, and are transparent with their intentions. They don’t manipulate or take advantage of others for personal gain.

Trust Protects Personal Boundaries

6.    But I feel trust also works both ways. If you trust someone, it means you feel safe with them—whether that means sharing passwords, sensitive documents, or just opening up emotionally. Without trust, these boundaries blur, and you might find yourself feeling vulnerable or exploited. Being liked won’t protect you from these risks—trust will.

Why Trust is the Key to Lasting Relationships

7.    Trust isn’t just about safety—it’s the cornerstone of a meaningful, lasting relationship. While likability might attract others to you in the short term, it’s trust that keeps them around. Without trust, relationships often fall apart. This is true in both personal and professional spheres. In personal relationships, trust fosters deep emotional connections and mutual respect. In professional settings, trust drives collaboration, accountability, and long-term success.

Trust Over Likability

8.    In a world where digital interactions are ubiquitous and personal data is constantly at risk, trust has become the most valuable currency in relationships. While being liked might give you instant popularity or affection, it’s trust—built on integrity, transparency, and consistency—that ensures your relationships remain safe, genuine, and secure.

9.    Whether it’s in an online friendship, a romantic relationship, or a professional connection, trust protects your boundaries, secures your personal information, and helps your relationships stand the test of time. As we continue to navigate a world filled with digital threats and manipulation, it’s clear that trust is far more important than being liked.

In the end, it's trust that keeps us safe and helps our relationships grow deeper. And that’s what really matters.

Thursday, May 23, 2024

Navigating the AI Highway: Why Privacy and Bias Are the Brakes We Can't Ignore

    In the fast-paced world of technological advancement, artificial intelligence (AI) has emerged as a game-changer across every domain. From healthcare to finance, education to entertainment, AI promises unprecedented levels of efficiency, innovation, and convenience. However, amidst the excitement of AI's limitless potential, there looms a critical concern: the need for brakes to navigate this digital highway safely.

    Imagine driving a vehicle without brakes – the consequences would be disastrous. Similarly, if AI models are unleashed into the world without due diligence regarding privacy and bias, we risk hurtling headlong into a future fraught with ethical dilemmas and societal discord.


    Without robust safeguards in place, our most intimate details – from health records to browsing habits – could become fodder for manipulation or discrimination.

    Moreover, the spectre of bias casts a long shadow over AI's promise of objectivity. While algorithms are often hailed for their impartiality, they are, in reality, only as unbiased as the data they're trained on. If these datasets reflect historical prejudices or systemic inequalities, AI systems can inadvertently perpetuate and exacerbate these biases, amplifying social disparities and deepening divides.

SO WHAT TO DO?

    So, how do we steer clear of this perilous path? The answer lies in embracing responsible AI development and deployment. Just as brakes ensure the safety of a vehicle, robust privacy protections and bias mitigation strategies serve as the guardians of ethical AI.

    First and foremost, organisations must prioritise privacy by design, embedding data protection principles into the very fabric of AI systems. This entails implementing stringent security measures, anonymizing sensitive information, and obtaining explicit consent from users before data is collected or processed.
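As one concrete illustration of what "privacy by design" can mean in practice, here is a minimal, hypothetical Python sketch that pseudonymizes direct identifiers before a record ever reaches an analytics or AI pipeline. The field names, salt handling, and truncation length are my own illustrative assumptions, not a standard scheme:

```python
import hashlib
import os

# Secret salt: kept out of the dataset so pseudonyms cannot be reversed
# by anyone who only sees the processed records.
SALT = os.urandom(16)

def pseudonymize(record: dict, identifier_fields=("name", "email")) -> dict:
    """Replace direct identifiers with opaque, salted tokens."""
    out = dict(record)
    for field in identifier_fields:
        if field in out:
            digest = hashlib.sha256(SALT + str(out[field]).encode()).hexdigest()
            out[field] = digest[:12]  # stable pseudonym, not the raw identifier
    return out

record = {"name": "Alice", "email": "alice@example.com", "visits": 42}
safe = pseudonymize(record)
print(safe)  # analytic fields survive; identifiers become opaque tokens
```

A salted hash alone is pseudonymization rather than full anonymization; a real deployment would pair it with consent capture, key rotation, and stricter access control.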

    Simultaneously, we must confront the spectre of bias head-on, conducting thorough audits and assessments to identify and mitigate discriminatory patterns within AI algorithms. By diversifying datasets, soliciting input from diverse stakeholders, and fostering interdisciplinary collaboration, we can cultivate AI systems that reflect the richness and diversity of the human experience.

    Transparency is another key ingredient in the recipe for responsible AI. Organisations must be forthcoming about their data practices and algorithmic decision-making processes, empowering users to make informed choices and hold AI systems accountable for their actions.

    So, as we hurtle down the digital highway of the 21st century, let us remember: the brakes of privacy and bias are not impediments to progress but rather the safeguards that ensure we reach our destination safely and ethically.

"Disclaimer: Portions of this blog post were generated with assistance from ChatGPT, an AI language model developed by OpenAI. While ChatGPT provided assistance in drafting the content, the views and opinions expressed herein are solely those of the author."

Thursday, April 04, 2024

From Likes to Privacy: Rethinking Our Approach to SHARENTING

      In the age of social media, parents are increasingly drawn into the world of "sharenting" – the practice of sharing photos, videos, and anecdotes about their children online. It's understandable; after all, who wouldn't want to share the joy of their child's first steps or that infectious smile with friends and family?

    However, what often begins as innocent sharing can have serious implications for our children's privacy and security. As parents, it's crucial to pause and consider the potential risks before hitting that 'post' button.

(Image generated by AI: https://gencraft.com/generate) 

    One of the primary concerns surrounding sharenting is the issue of consent. Children are unable to give informed consent to having their lives broadcasted online, yet their parents often do so without a second thought. What seems adorable or funny to us now may be deeply embarrassing or even harmful to our children as they grow older.

    Moreover, the internet is a vast and often unpredictable space. Every photo, video, or story shared about our children becomes part of their digital footprint, potentially accessible to anyone with an internet connection. This leaves them vulnerable to identity theft, cyberbullying, and even exploitation by malicious individuals.

    As parents, it's our responsibility to prioritize our children's privacy and safety above the temporary validation of likes and comments. Instead of seeking approval from strangers online, we should focus on creating meaningful connections and memories with our children in the real world.

    So, before you share that adorable photo or heartwarming anecdote, take a moment to consider the long-term consequences. Is it worth sacrificing your child's privacy for a few moments of online validation? Let's break free from the cycle of sharenting and safeguard our children's privacy for the future.

Sunday, December 10, 2023

Understanding Differential Privacy: Protecting Individuals in the Age of AI

In today's data-driven world, artificial intelligence (AI) is rapidly changing how we live and work. However, this progress comes with a significant concern: the potential for AI to compromise our individual privacy. Enter differential privacy, a powerful tool that strives to strike a delicate balance between harnessing the power of data and protecting individual identities.

What is Differential Privacy?

Imagine a database containing personal information about individuals, such as medical records or financial transactions. Differential privacy ensures that any information extracted from this database, such as trends or patterns, cannot be traced back to any specific individual. It achieves this by adding carefully controlled noise to the data, making it difficult to distinguish whether a specific individual exists in the dataset.

For example, imagine you're in a crowd and someone wants to know the average height of everyone around you. They could measure everyone individually, but that would be time-consuming and would reveal everyone's specific height. Differential privacy steps in with a clever solution: instead of reporting the data directly, it adds a bit of "noise". This noise is like a small mask that protects individual identities while still allowing us to learn about the crowd as a whole.

In simpler terms, differential privacy is a way to share information about a group of people without revealing anything about any specific individual. It's like taking a picture of the crowd and blurring out everyone's faces, so you can still see the overall scene without recognising anyone in particular.

Here are the key points to remember:

  • Differential privacy protects your information. It ensures that your data cannot be used to identify you or track your activities.
  • It allows data to be shared and analyzed. This is crucial for research, development, and improving services.
  • It adds noise to the data. This protects individual privacy while still allowing us to learn useful information.

Another example: imagine you're sharing your browsing history with a company to help them improve their search engine. With differential privacy, the company can learn which websites are popular overall without knowing which specific websites you visited. This way, you contribute to a better search experience for everyone while still protecting your privacy.

Differential privacy is still a complex topic, but hopefully, this explanation provides a simple understanding of its core principle: protecting individual privacy in the age of data sharing and AI.

Think of it like this

You want to learn the average salary of employees in a company without revealing anyone's individual salary. Differential privacy allows you to analyze the data while adding some "noise." This noise acts as a protective barrier, ensuring that even if you know the average salary, you cannot determine the salary of any specific employee.
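The salary example above can be made concrete with the Laplace mechanism, the classic way differential privacy adds calibrated noise. This is a toy sketch, not a production DP library; the salary cap and epsilon value are illustrative assumptions of mine:

```python
import random

def laplace(scale: float) -> float:
    # The difference of two exponential draws is a Laplace(0, scale) sample.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_mean(salaries, epsilon=1.0, upper=200_000.0):
    # Clip salaries so any one person can shift the mean by at most upper/n
    # (the "sensitivity"), then add noise proportional to that sensitivity.
    clipped = [min(max(s, 0.0), upper) for s in salaries]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = upper / len(clipped)
    return true_mean + laplace(sensitivity / epsilon)

salaries = [52_000, 61_000, 48_500, 75_000, 58_200] * 40  # 200 employees
print(private_mean(salaries, epsilon=1.0))  # near the true mean of 58,940, plus noise
```

Smaller epsilon means stronger privacy and noisier answers; knowing the noisy average tells you essentially nothing about any single employee's salary.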

Benefits of Differential Privacy

Enhanced privacy protection: Differential privacy offers a strong mathematical guarantee of privacy, ensuring individuals remain anonymous even when their data is shared.

Increased data sharing and collaboration: By protecting individual privacy, differential privacy enables organizations to share data for research and development purposes while minimizing privacy risks.

Improved AI fairness and accuracy: Differential privacy can help mitigate biases in AI models by ensuring that the models learn from the overall data distribution instead of being influenced by individual outliers.

Examples of Differential Privacy in Action

Apple's iOS: Differential privacy is used to collect usage data from iPhones and iPads to improve the user experience without compromising individual privacy.

Google's Chrome browser: Chrome uses differential privacy to collect data on browsing behavior for improving search results and web standards, while protecting the privacy of individual users.

US Census Bureau: The Census Bureau employs differential privacy to release demographic data while ensuring the privacy of individual respondents.

The Future of Differential Privacy

As AI continues to evolve, differential privacy is poised to play a crucial role in safeguarding individual privacy in the digital age. Its ability to enable data analysis while protecting individuals makes it a valuable tool for researchers, businesses, and policymakers alike. By embracing differential privacy, we can ensure that we reap the benefits of AI while safeguarding the fundamental right to privacy.

Remember, differential privacy is not a perfect solution, and there are ongoing challenges to ensure its effectiveness and efficiency. However, it represents a significant step forward in protecting individual privacy in the age of AI.

Saturday, February 06, 2021

BRAVE Web Browser: How Many Advertisements Did I Block in a Week?

Hello friends. First, here are screenshots of my Brave browser dashboard captured over a week, from 30 Jan 2021 to 6 Feb 2021. Each screenshot was taken in the morning, when I first logged in for the day. I am generally on my PC for about 8-9 hours a day, mostly watching YouTube videos and surfing across many sites, at times 50-100 in a day. So what I am bringing out here is how many ads the Brave browser blocked over this duration of surfing.

30 Jan 2021 : cumulative total 70,517 blocked (baseline)

31 Jan 2021 : cumulative total 77,744 blocked : 7,227 blocked on 30 Jan

01 Feb 2021 : cumulative total 92,342 blocked : 14,598 blocked on 31 Jan

02 Feb 2021 : cumulative total 97,440 blocked : 5,098 blocked on 01 Feb

03 Feb 2021 : cumulative total 105,386 blocked : 7,946 blocked on 02 Feb

04 Feb 2021 : cumulative total 108,354 blocked : 2,968 blocked on 03 Feb

05 Feb 2021 : cumulative total 112,036 blocked : 3,682 blocked on 04 Feb

06 Feb 2021 : cumulative total 114,870 blocked : 2,834 blocked on 05 Feb


So the total number of ads blocked over the week is 44,353, which works out to an average of roughly 6,336 ads per day. That's phenomenal. Each of these trackers is likely linked to thousands of other trackers, giving them broad access to user behaviour, privacy and profiles. There is not much a normal user can do about these trackers, and just blocking thousands of ads will not make anyone absolutely free of tracking, but it is still better than surfing without blockers. If the ad companies know that users block ads with plugins and browsers like this, they will have found other ways to track you; after all, that's their livelihood.
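For anyone who wants to check the arithmetic, the daily figures and the weekly average fall straight out of the cumulative dashboard totals:

```python
# Daily counts are just differences of consecutive cumulative totals.
cumulative = {
    "30 Jan": 70_517, "31 Jan": 77_744, "01 Feb": 92_342, "02 Feb": 97_440,
    "03 Feb": 105_386, "04 Feb": 108_354, "05 Feb": 112_036, "06 Feb": 114_870,
}
totals = list(cumulative.values())
daily = [b - a for a, b in zip(totals, totals[1:])]

print(daily)                  # [7227, 14598, 5098, 7946, 2968, 3682, 2834]
print(sum(daily))             # 44353 ads blocked over the week
print(round(sum(daily) / 7))  # 6336 per day on average
```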

Sunday, August 13, 2017

Whonix : Debian GNU/Linux based Security-focused Linux distribution

1.     Even if one is not doing anything wrong, one is being watched and recorded in real time, as Edward Snowden revealed a few years back. Most Internet users value online anonymity, with a majority saying they have taken steps to remove or mask their digital footprints, and that they have taken steps to avoid being observed by specific people, organizations, or governments. Whonix is a Debian GNU/Linux based security-focused Linux distribution which aims to provide privacy, security and anonymity on the internet. The operating system consists of two virtual machines, a "Workstation" and a Tor "Gateway", both running Debian GNU/Linux. All communications are forced through the Tor network. This post gives you screenshots of the installation and execution of the virtual appliances involved.

2.    The Gateway VM is responsible for running Tor, and has two virtual network interfaces. One of these is connected to the outside Internet via NAT on the VM host, and is used to communicate with Tor relays. The other is connected to a virtual LAN that runs entirely inside the host.

3.    The Workstation VM runs user applications and is connected only to the internal virtual LAN, and can directly communicate only with the Gateway, which forces all traffic coming from the Workstation to pass through the Tor network. The Workstation VM can "see" only IP addresses on the Internal LAN, which are the same in every Whonix installation.

4.  Download the two virtual machines, i.e. the Gateway and the Workstation, from https://www.whonix.org/wiki/VirtualBox

5.   Once you have downloaded the two machines from the link above, the following screenshots will assist you with the installation; the two downloaded files are seen below.
Instead of creating a virtual machine and then mounting a VDI, here we simply import the .ova appliance; the rest is automatic.
(Screenshot sequence: in the VirtualBox import wizard, click Next through the prompts, agree to the T&C, wait a few minutes while the appliance loads, then click Import and agree again. Repeat the import for the Workstation appliance. You will then have two machines listed in the VirtualBox console; click each and press Start. On first boot, click Next/OK through the setup prompts and let the updated Tor download complete.)
Here we see an IP address geolocated to Budapest, Hungary...and that's surely not the user....:-)

Sunday, September 25, 2016

Privacy Concerns & Server Locations: Hike-Telegram-WhatsApp

1.    I have often observed discussions among my friends and circle about which chat messenger is safe and which is not in terms of safety and privacy: whether the servers are located inside the country or offshore, how user data is shared, and how privacy is likely to be compromised by third parties. So, as a quick check, here I present an overview of such FAQs in the context of server locations and data-sharing, primarily sourced from the original websites.

Saturday, February 13, 2016

Computer Hacking is LEGAL @ GCHQ

1.  Privacy International, a UK-based registered charity that defends and promotes the right to privacy across the world, has lost a case challenging GCHQ's hacking practices.

 

So, as it stands, GCHQ now has an official tick of approval for hacking into devices to obtain intelligence in the interest of national security. The court ruled in favour of GCHQ, and in the process GCHQ has for the first time confirmed that it hacks into IT and computer devices, something which until now was only anticipated, or believed true on the basis of the revelations of NSA whistleblower Edward Snowden.

 Source : http://www.wired.co.uk/news/archive/2015-03/20/gchq-hacking-faq


2.   An extract produced as follows from http://www.bbc.com/news/uk-politics-35558349

"Hackers can remotely activate cameras and microphones on devices, without the owner's knowledge, log keystrokes, install malware, copy documents and track locations among other things"

3.   Another extract, produced below from the judgment itself:
"The use of computer network exploitation by GCHQ, now avowed, has obviously raised a number of serious questions, which we have done our best to resolve in this Judgment. Plainly it again emphasises the requirement for a balance to be drawn between the urgent need of the Intelligence Agencies to safeguard the public and the protection of an individual's privacy and/or freedom of expression."

4.   Right or wrong, one thing has come out loud and clear: there is no privacy while anyone is on the net. Whatever you do or attempt from your mobile device or computer, nothing is yours.

Sunday, May 24, 2015

Android Factory Reset: How Trustworthy from a PRIVACY View?

1.  It is an accepted fact that one can remove all data from an Android device by resetting it to factory settings, either via the Settings menu or via the Recovery menu. It is also understood that a factory data reset wipes all data (app data, photos, music, etc.) from the device. Such a reset is usually performed as a maintenance step, or when the user decides to sell the phone to a third party. But when a user performs a factory reset believing all of his or her data has been removed, there is a sad angle, recently revealed in a paper named "Security Analysis of Android Factory Resets" by Laurent Simon and Ross Anderson at the University of Cambridge (available at http://www.cl.cam.ac.uk/~rja14/Papers/fr_most15.pdf), which demonstrates technically that the data, and the privacy of accounts, do not necessarily go with the reset. Read on for brief details...

2.  Even with full-disk encryption in play, the researchers found that performing a factory reset on Android smartphones isn't always as safe as it is assumed to be. They found that the file storing decryption keys was not erased during the factory reset, and they were able to access data on "wiped" Android devices from a wide variety of sources, including text messages, images, video, and even third-party applications. What's more, the researchers were able to recover Google authentication tokens, enabling them to sync any data a user had tied to Google's services, including private emails. The study unveils five critical failures:

- the lack of Android support for proper deletion of the data partition in v2.3.x devices;

- the incompleteness of upgrades pushed to flawed devices by vendors;

- the lack of driver support for proper deletion shipped by vendors in newer devices (e.g. on v4.[1,2,3]);

- the lack of Android support for proper deletion of the internal and external SD card in all OS versions;

- the fragility of full-disk encryption to mitigate those problems up to Android v4.4 (KitKat).

RECOVERY DETAILS OF DATA BY RESEARCHERS

ATTRIBUTED REASON

3.   Smartphones use flash for their non-volatile memory storage because it is fast, cheap and small. Flash memory is usually arranged in pages and blocks. The CPU can read or write a page (typically 512+16 to 4096+128 data+metadata bytes), but can only erase a block of 32 to 128 pages. Each block contains both data and "out-of-band" (OOB) metadata. When removing a file, an OS typically only deletes its name from a table, rather than deleting its content. The situation is aggravated on flash memory because data updates do not occur in place: data are copied to a new block, to preserve performance, reduce the erasure count and slow down wear. This is the vulnerability the researchers exploited.
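The out-of-place update behaviour described above is why "deleted" data lingers. Here is a deliberately simplified toy model, my own illustration rather than anything like real flash-controller firmware, of why removing or rewriting a file leaves its old bytes behind:

```python
# Toy model of out-of-place flash updates: a rewrite goes to a fresh
# physical page and only the logical mapping changes, so stale plaintext
# survives in the old page until a later block erase reclaims it.
class ToyFlash:
    def __init__(self, num_pages=8):
        self.physical = [None] * num_pages  # raw page contents
        self.mapping = {}                   # logical page -> physical page
        self.next_free = 0

    def write(self, logical, data):
        self.physical[self.next_free] = data  # always write to a fresh page
        self.mapping[logical] = self.next_free
        self.next_free += 1

    def delete(self, logical):
        del self.mapping[logical]  # only the table entry is removed

flash = ToyFlash()
flash.write(0, "my secret PIN")
flash.write(0, "replacement")   # update lands in a new physical page
flash.delete(0)                 # a reset-style "delete"
print(flash.mapping)            # {} -- logically gone
print(flash.physical[:2])       # ['my secret PIN', 'replacement'] -- still recoverable
```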

Wednesday, December 03, 2014

Harden your LinkedIn Settings : A Necessity Now

Most of us are part of various social networking sites and keep updating ourselves via status updates, pictures and small life updates. The privacy and security issues around these sites are already a serious concern among users. On most such sites and apps, whether on a desktop or a mobile, we are not very serious in how we respond and interact. But LinkedIn is different: no jokes, no clips, no tagging, no personal comments, no WOWs... it's all professional. And because most of us take it seriously, we also feed serious inputs into it. But do we take the necessary precautions too? Among my own circle I have mostly seen the opposite; hardly anyone has spared the time to configure LinkedIn's privacy and security settings. In this post I bring out the basic and necessary configuration steps to harden your LinkedIn interface to the world.

Tuesday, July 29, 2014

Snowden Reveals : Projects to Profile YOU

1.  Documents revealed by Edward Snowden pertaining to the National Security Agency (NSA), US surveillance programs and US Intelligence Community partners abroad were released about a year back, and revealed a horde of code-named projects, all intruding into our lives in some way or the other. This post brings out a glossary of these code-named PROJECTS along with a small brief on the intent of each. I have listed them here after reading "The Snowden Files" by Luke Harding. This long list is actually a minuscule fraction of the thousands of hidden projects which are after every bit of info that we share digitally: Skype, SMS, MMS, WhatsApp, fax, emails, chat, photos, etc. That is all in all everything!


Blackfoot

The codename given to an NSA operation to gather data from French diplomats' offices at the United Nations in New York and this information was collected from bugged computer screens.

Accumulo

The name given to an open-source database created by the National Security Agency (NSA) but later made available to others via the Apache Foundation. It stores large amounts of structured and unstructured data across many computers and can use it to create near real-time reports.

Blackpearl

NSA has been spying on Petrobas, Brazil's largest oil company, through the "Blackpearl" program that extracts data from private networks.

Eveningeasel

The NSA conducts its surveillance of telephone conversations and text messages transmitted through Mexico's cell phone network under the internal code name "Eveningeasel."

Angry Birds

Leaked documents indicate that the NSA and GCHQ routinely try to gain access to personal data from Angry Birds and other mobile applications.

Bullrun/Edgehill

The revelations claim that "vast amounts of encrypted Internet data which have up till now been discarded are now exploitable" via Bullrun, a clandestine, highly classified decryption program run by the United States National Security Agency (NSA); the British signals intelligence agency, Government Communications Headquarters (GCHQ), runs a similar program codenamed Edgehill.

Boundless Informant

A tool used by the NSA to analyse the metadata it holds. It aims to let analysts know what information is currently available about a specific country and whether trends can be deduced.

Cheesy Name

A GCHQ program designed to identify encryption keys that could be cracked by the agency's computers.

Dishfire

The codename for a system used to process and store SMS message data. A leaked 2011 NSA presentation, published by the Guardian, indicated it was used to collect about 194 million texts a day, adding that the content was shared with GCHQ.

Dropmire

The name for a way to bug security-enhanced fax machines to provide the NSA with access to documents that have passed through encrypted fax machines based in other countries' foreign embassies.

Genie

An NSA programme, identified in a leaked memo analysed by the Washington Post, which is said to involve the remote delivery of spyware to devices on foreign-controlled networks.

Marina

The NSA's tool to gather metadata about the online activity of targets and other internet users. The Marina metadata application tracks a user's browser experience, gathers contact information/content and develops summaries of targets.

Thinthread

A proposed NSA system to chart relationships between people in real-time.

Muscular

A joint project operated by the NSA and GCHQ to intercept data from the cable links used by Google and others to connect up their computer servers, which are located across the world.

Fallout

Identified by an alleged NSA slide, the term appears to refer to an effort to screen out metadata collected about US citizens as part of the Prism programme before it is analysed by the Marina and Mainway systems.

Nucleon

An NSA tool used to analyse voice data gathered via the Prism programme.

EgotisticalGiraffe

The alleged codename given to an NSA effort to track users of Tor (The Onion Router) - a project that aims to let people browse the web anonymously by bouncing their traffic through other people's computers.

Perdido

The codename for an NSA surveillance operation targeting the EU's offices in New York and Washington.

Prism

A surveillance system launched in 2007 that allows the NSA to "receive" emails, video clips, photos, voice and video calls, social networking details, log-ins and other data held by a range of US internet firms, including Apple, AOL, Facebook, Google (including YouTube), Microsoft (including Skype), Paltalk and Yahoo.

QuantumInsert

A technique used to redirect a target's computer to a fake website where it can be infected with malware.

Stellarwind

A metadata-collection scheme covering communications in which at least one party was outside the US and none of the parties was known to be a US citizen.
 
Tempora

The codename given to a GCHQ operation that creates a "buffer" allowing huge amounts of data to be temporarily stored for analysis. It holds content gathered from tapped fibre-optic cables for three days and metadata for 30 days, so that both GCHQ and the NSA can search and analyse it before the details are lost.

FoxAcid

A tool reportedly used by the NSA to study what vulnerabilities a target's computer has. It then uses this knowledge to infect the machine with malware via a web browser.

 

Sunday, July 27, 2014

Harden PRIVACY : PRIVACY BADGER Tool

1.    Until a few years back, PRIVACY meant the state of being free from unsanctioned intrusion into your physical life by peers, friends or strangers, but the word has taken on a whole new dimension since Snowden released his hidden files around June last year. Today not only the NSA but a plethora of third-party agencies are out to track you, profile you, read you. Though in earlier posts here I mentioned tools like disconnect.me, Adblock Plus and Ghostery, technology has since improved further, and in this post I discuss PRIVACY BADGER: a browser add-on that stops advertisers and other third-party trackers from secretly tracking where you go and what pages you look at on the web. If an advertiser seems to be tracking you across multiple websites without your permission, Privacy Badger automatically blocks that advertiser from loading any more content in your browser. To the advertiser, it's like you suddenly disappeared. Looks interesting!



3.   Once installed, as seen above, we get a red hexagon indicating that Privacy Badger is active, and it has colour indicators as follows:
  • Green means there's a third party domain, but it hasn't yet been observed tracking you across multiple sites, so it might be unobjectionable. When you first install Privacy Badger every domain will be in this green state but as you browse, domains will quickly be classified as trackers.
  • Yellow means that the third-party domain appears to be trying to track you, but it is on Privacy Badger's cookie-blocking "whitelist" of third-party domains that, when analyzed, seemed to be necessary for Web functionality. In that case, Privacy Badger will load content from the domain but will try to screen out third-party cookies and supercookies from it.
  • Red means that content from this third party tracker has been completely disallowed.
4.   Currently available for Chrome; here I have used the beta for the Mozilla browser, though the site says they will soon release the extension for other browsers, including Opera and Safari, too!
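The colour logic above can be sketched roughly as follows. The three-site threshold and the whitelist entry are my own illustrative assumptions, not Privacy Badger's actual internals:

```python
# Rough sketch of the green/yellow/red classification described above.
TRACKING_THRESHOLD = 3  # assumed: sites a domain must track you across
COOKIE_WHITELIST = {"cdn.example-widgets.com"}  # hypothetical "needed for the page" domain

def classify(domain: str, sites_seen_tracking_on: int) -> str:
    if sites_seen_tracking_on < TRACKING_THRESHOLD:
        return "green"   # third party, not yet observed tracking across sites
    if domain in COOKIE_WHITELIST:
        return "yellow"  # load content, but screen out cookies/supercookies
    return "red"         # content from this tracker is completely disallowed

print(classify("analytics.tracker.net", 1))    # green
print(classify("cdn.example-widgets.com", 5))  # yellow
print(classify("analytics.tracker.net", 5))    # red
```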

Friday, June 13, 2014

CLOUDUSB : Another way to secure yourself....

1.     "Cloud" has easily been the most buzzed-about term in the computing industry over the past few years. A case in point is the CloudUSB distribution, a project that promises automatic backups along with data privacy protection. The cloud name is catchy, but the security is far less than promised. It is actually a USB-based Linux distribution based on Ubuntu 10.04 LTS (old, but it works for the general user; I am on 14.04 LTS myself now). The idea is that you can carry your own Linux distribution with you for use anywhere, thus allowing anyone to use Linux on any computer and keep their data safe in the event the USB key is lost.

2.    CloudUSB uses the Dropbox service to synchronize data, so users who don't already have a Dropbox account will need to set one up before being able to use the synchronization service. CloudUSB sets up a data folder and a private-data folder for keeping sensitive files in. The setup.sh script that comes with the distribution uses encfs to set up an encrypted directory, though it appears the script isn't properly encrypting the directory. When the system is rebooted, it does use encfs to mount the Dropbox/private-data directory as Desktop/.private-data. A step-by-step series of screenshots is shown below. I ran this in VirtualBox; the distribution can be downloaded at http://cloudusb.net/?DOWNLOAD

...and there you are: ready, set, go!