How to get rid of all your followers on Facebook

 

Why?


Thought that amassing tens of thousands of followers on Facebook was cool, until you got halfway there and realized it's the most useless thing in the world?

Here's how to get rid of all your followers and regain control of your Facebook account.

It's not as simple as clicking one button


Note again that we aren't talking about you unfollowing someone else. Rather, it's about forcing all of your followers to unfollow you.

What doesn't work
- Restricting or turning off your "Follow" option. This stops new people from following you, but it won't make your existing followers unfollow you. Tried it; it fails.
- Deactivating your account and then reactivating it: after reactivation you will discover that all your followers are still with you. Nothing changes.

What works


Blocking a follower and then unblocking them.

That makes their account unfollow you, with no side effects. But unless you want to spend a year tapping on every single follower in your list to block and then unblock them one by one, you won't want to do it manually, especially when you already have thousands of followers. What a mess.

Automation helps in these cases.

Automating the process : The Challenges


- Logging into facebook from your script.
- Parsing and extracting the URLs you care about.
- Staying under Facebook's rate limit, which might otherwise wake the giant.

Logging into Facebook


Log into Facebook through mbasic.facebook.com, which is much lighter weight than the regular web page.

In Chrome, open the Developer Tools from Menu - More Tools - Developer Tools, or press Ctrl-Shift-I. Click the "Network" tab.

Now click around the page wherever you would normally click, and watch the API calls show up in the Developer Tools. Right-click the request you find interesting and then select Copy - Copy as cURL.

You will find a cURL command copied to your clipboard that, when run from a terminal, mimics exactly that call, with all the cookies and your browser identity.

Example:


curl 'https://mbasic.facebook.com/example?v=followers&lst=552028175%3A552028175%3A1619628724&refid=17' \
-H 'authority: mbasic.facebook.com' \
-H 'sec-ch-ua: " Not A;Brand";v="99", "Chromium";v="90", "Google Chrome";v="90"' \
-H 'sec-ch-ua-mobile: ?1' \
-H 'upgrade-insecure-requests: 1' \
-H 'dnt: 1' \
-H 'user-agent: Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Mobile Safari/537.36' \
-H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9' \
-H 'sec-fetch-site: same-origin' \
-H 'sec-fetch-mode: navigate' \
-H 'sec-fetch-user: ?1' \
-H 'sec-fetch-dest: document' \
-H 'referer: https://mbasic.facebook.com/example?refid=8' \
-H 'accept-language: en-US,en;q=0.9,bn;q=0.8,ar;q=0.7,pt;q=0.6' \
-H 'cookie: sb=example; _fbp=example; datr=-ZBsYBgM8725ieW5bMu4gDTA; c_user=example; wd=1280x719; spin=r.1003698545_b.trunk_t.1619600502_s.1_v.2_; presence=AAcUp-f2WTegvayyOaaQrwpQnssgEePfwHI7uKPCHBp99; fr=1BgiZI7.d0.AAA.0.0.BgiZI7.AWV5jKyxuwo; dpr=2' \
--compressed

Managing cookies


See the cookie in the request? It can change over time and between calls. Here's how to manage the cookies.

Get Chrome's "Get Cookies.txt" extension from here.

Install it and, on any site, it will show a "Download all cookies for this site as cookies.txt file" button. Click it and a cookies.txt file will be saved into your ~/Downloads folder.

Use that file as your cookie jar in your curl calls. Remove the "-H 'cookie: ...'" line from the copied command and add these two lines instead (the snippet below assumes you've renamed the downloaded cookies.txt to fbcookie.txt):


curl ...
-b fbcookie.txt \
-c fbcookie.txt \

Re-export this fbcookie.txt file periodically, whenever a login starts failing.
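You can also catch an expired cookie before a batch run. The sketch below checks a fetched page for the logout link, which only appears while you are logged in; using "logout" as the marker is an assumption, so verify it against your own logged-in page source first.

```shell
# Reads a fetched mbasic page on stdin and checks for the logout link,
# which only appears while the session is valid. "logout" as a marker
# is an assumption; verify it against your own logged-in page.
session_valid() {
  grep -qi 'logout'
}

# Typical use before a batch run:
#   curl -s -b fbcookie.txt -c fbcookie.txt https://mbasic.facebook.com/ \
#     | session_valid || echo "cookie expired, re-export fbcookie.txt" >&2
```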

Extracting URLs


XPath is the right tool for this. There isn't a ready-made Linux XPath command-line tool that I know of, but you can easily write one, like the following, written in PHP:

#!/usr/bin/env php
<?php
// xpath: evaluate an XPath expression against HTML from a file or stdin.
if ($argc < 2) {
    print "Usage: xpath <expression> [file]\n";
    exit(1);
}
$dom = new DOMDocument();
// Suppress warnings about the malformed HTML Facebook serves.
@$dom->loadHTML(file_get_contents($argc >= 3 ? $argv[2] : "php://stdin"));
$xp = new DOMXPath($dom);
$ret = $xp->query($argv[1]);
foreach ($ret as $v) {
    // Attribute nodes carry ->value, text nodes carry ->data.
    $val = isset($v->value) ? $v->value : $v->data;
    print trim(preg_replace("/[\r\n]+/", " ", $val)) . "\n";
}

Then use it to extract all the links from the downloaded HTML page like this:


cat file.html | xpath '//a/@href'

This will print every link found in the file. Parse the output lines and extract the ones you need.
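For example, a small filter can pull the target user ids out of the block links. The "block" path fragment and the "bid=" parameter name below are assumptions about how mbasic encodes the target id; adjust the patterns to match the links your own followers page actually produces.

```shell
# Keep only block-confirmation links and pull out the target user ids.
# "block" and "bid=" are assumed patterns; check them against the real
# links on your own followers page and adjust as needed.
extract_block_ids() {
  grep 'block' | grep -o 'bid=[0-9]*' | cut -d= -f2 | sort -u
}

# Example pipeline, collecting one id per line:
#   cat file.html | xpath '//a/@href' | extract_block_ids > followers.txt
```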

Last: Rate Limits


Blocking and then unblocking around 80 persons per day seems like a safe bet. Run it over 1000 ids in 24 hours and your account will get restricted for 24 hours. Instead, put your script in a cronjob running every hour or two over small batches. That should do it.
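Putting it together, the cron job could look like the sketch below. Every name here (followers.txt, done.txt, the block_then_unblock placeholder) is illustrative, not anything Facebook provides; the placeholder is where your two captured "Copy as cURL" commands go.

```shell
#!/usr/bin/env bash
# Cron batch driver (sketch). Assumes followers.txt holds one follower id
# per line; done.txt records ids already processed across runs.

block_then_unblock() {
  # Placeholder: substitute your two captured "Copy as cURL" commands,
  # splicing "$1" into the id parameter of the block and unblock URLs.
  echo "processed $1"
}

run_batch() {
  local batch=${BATCH:-5} delay=${DELAY:-45}
  touch done.txt
  # Take the next few ids that are not in done.txt yet.
  grep -vxF -f done.txt followers.txt 2>/dev/null | head -n "$batch" |
  while read -r id; do
    block_then_unblock "$id"
    echo "$id" >> done.txt
    sleep "$delay"           # spread the two-call pairs out
  done
}

run_batch
```

With BATCH=5, a crontab entry running this every two hours works through roughly 60 ids a day, comfortably under the 80/day mark.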