Saturday, October 20

Facebook and Ad Campaigns



Is it true that bots are clicking on ads in Facebook?

Yes, it's true!

The company Limited Run noticed it could verify only about 20 percent of the clicks that were supposedly being converted into users showing up on its website.

That means roughly 80% of the clicks it was paying for were coming from bots.

Bots are computer scripts or fake users that simply click the ads over and over.
  

What if you complain about this to Facebook?

If a complaint is sent to Facebook, it asks for the information below:
  1. Server Logs
  2. Aggregated counts of your clicks

What are these?

1. Server Logs

Raw server logs of all clicks coming to your website, or of just the clicks coming from Facebook, with an explanation of how you filtered them.

These server logs must contain:
  1. Timestamp of page load
  2. User agent string
  3. User IP
  4. Exact page loaded, with the parameters passed

2. Aggregated counts of your clicks (Optional)

If possible, please also include the following:
  1. The total number of clicks you received from Facebook, split by day, for the specific time period in which you noticed the click issues.
  2. The total number of clicks you were billed for, by Facebook, also by billable day for the period in question.
  3. A screenshot of your external reporting system showing the total number of clicks received from Facebook.


How to get Server Logs?

For the Tomcat application server, this can be achieved by adding an 'AccessLogValve' entry to server.xml with its pattern attribute set to 'combined'.

<Valve className="org.apache.catalina.valves.AccessLogValve"
       pattern="combined"
       directory="logs"
       prefix="yoursitename_access"
       suffix=".txt" />
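
With the combined pattern, every request is logged with exactly the fields Facebook asks for: client IP, timestamp, the exact page loaded with its parameters, and the user-agent string. A hypothetical entry (all values made up) looks like this:

203.0.113.42 - - [20/Oct/2012:14:03:21 +0000] "GET /landing?utm_source=facebook HTTP/1.1" 200 5120 "http://www.facebook.com/" "Mozilla/5.0 (compatible; ExampleBot/1.0)"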
  

Techniques to find or block bots


1. A unique cookie is one way to find out whether a user has registered multiple times from the campaign.

When a user registers through a campaign, the server should store a unique cookie value in that user's browser. The next time someone registers from that browser, the server can detect the repeat by looking at the stored cookie, as in the sketch below.
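
Here is a minimal sketch of the idea as a Java servlet (Tomcat), with a hypothetical cookie name reg_id; a real implementation would tie the cookie value to the stored registration:

import java.io.IOException;
import java.util.UUID;

import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RegistrationServlet extends HttpServlet {
    private static final String REG_COOKIE = "reg_id"; // hypothetical name

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // A repeat registration: the browser already carries our cookie.
        Cookie[] cookies = req.getCookies();
        if (cookies != null) {
            for (Cookie c : cookies) {
                if (REG_COOKIE.equals(c.getName())) {
                    resp.sendError(HttpServletResponse.SC_CONFLICT,
                            "This browser has already registered from the campaign.");
                    return;
                }
            }
        }
        // First registration from this browser: set a unique marker.
        Cookie marker = new Cookie(REG_COOKIE, UUID.randomUUID().toString());
        marker.setMaxAge(60 * 60 * 24 * 365); // keep for one year
        marker.setHttpOnly(true);
        resp.addCookie(marker);
        // ... continue with the normal registration flow ...
    }
}

Note that a bot which clears its cookies between clicks defeats this check, so it only catches naive repeat registrations.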


2. Create a blacklist of user-agent strings of known bots, such as FeedBurner and Googlebot.

Whenever a request comes in, compare its user-agent string against the blacklist to determine whether it is a bot. You can even take a prevention step and block the identified bots outright (see the filter sketch after this technique).

This implementation has a performance cost, because every incoming request's user agent has to be checked against the blacklist.

Also, some of the clicks come from real users (cheap labour hired by an ad agency), and no blacklist is going to catch those real users.
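
A minimal sketch of the blacklist check as a servlet filter, assuming Servlet 4.0+ (e.g., Tomcat 9); the blacklist entries and the simple substring matching are illustrative only:

import java.io.IOException;
import java.util.Locale;
import java.util.Set;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class BotBlacklistFilter implements Filter {
    // Substrings of user-agent strings of known bots (illustrative).
    private static final Set<String> BLACKLIST =
            Set.of("feedburner", "googlebot", "bingbot");

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        String ua = req.getHeader("User-Agent");
        if (ua != null) {
            String lower = ua.toLowerCase(Locale.ROOT);
            for (String bot : BLACKLIST) {
                if (lower.contains(bot)) {
                    // Prevention step: reject the request outright.
                    ((HttpServletResponse) response)
                            .sendError(HttpServletResponse.SC_FORBIDDEN);
                    return;
                }
            }
        }
        chain.doFilter(request, response);
    }
}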


3. Flag bot vs. non-bot.

Log information about browsers that cannot execute JavaScript, and then build some sort of system that analyzes those browsers and their trends (for example, whether they originate from a certain IP range). JavaScript is on by default, and turning it off is an explicit manual action; hardly any real user disables it unless they are doing something suspicious, or they just hate the word 'java'. One way to implement the flag is sketched below.

The drawback is that the bots are caught only after the analysis, not in real time.
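
One possible implementation, assuming a hypothetical /js-beacon endpoint that a one-line script on the landing page calls: page loads are recorded when served, the beacon removes entries for browsers that actually ran the script, and whatever is left over gets analyzed later.

import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class JsBeaconServlet extends HttpServlet {
    // pageLoadId -> client IP, recorded when the landing page is served:
    //   pendingLoads.put(pageLoadId, request.getRemoteAddr());
    // (A real system would keep this in shared storage, not servlet memory.)
    static final Map<String, String> pendingLoads = new ConcurrentHashMap<>();

    // The landing page carries a one-line script such as:
    //   <script>new Image().src = '/js-beacon?id=PAGE_LOAD_ID';</script>
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String id = req.getParameter("id");
        if (id != null) {
            // The beacon fired, so this browser executed JavaScript;
            // remove it from the suspect list.
            pendingLoads.remove(id);
        }
        resp.setStatus(HttpServletResponse.SC_NO_CONTENT);
    }
}

A scheduled job would then sweep pendingLoads for entries older than some timeout and write them, with their IPs, to a log for the trend analysis.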

4. Whenever a user registers, send a verification code to the user's email or phone (SMS) to authenticate their identity, as in the sketch below.
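
A minimal sketch of generating such a one-time code; the delivery call is a hypothetical stand-in for whatever email/SMS gateway you use:

import java.security.SecureRandom;

public class VerificationCodes {
    private static final SecureRandom RANDOM = new SecureRandom();

    /** Returns a random six-digit one-time code, e.g. "042917". */
    static String newCode() {
        return String.format("%06d", RANDOM.nextInt(1_000_000));
    }

    static void startVerification(String contact) {
        String code = newCode();
        // Store the code server-side with an expiry, then deliver it:
        // sendToUser(contact, code); // hypothetical email/SMS gateway call
    }
}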
 

5. CAPTCHA

But CAPTCHAs are not good for the user experience and may even prevent irritated users from registering.
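
If you do use one, the server-side check is straightforward. As one concrete example, Google reCAPTCHA tokens are verified by POSTing them to its siteverify endpoint; a rough sketch (Java 10+, placeholder secret and token, crude JSON check):

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class CaptchaCheck {
    /** Returns true if the CAPTCHA token checks out with Google. */
    static boolean verify(String secret, String token) throws IOException {
        URL url = new URL("https://www.google.com/recaptcha/api/siteverify");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        String body = "secret=" + URLEncoder.encode(secret, StandardCharsets.UTF_8)
                + "&response=" + URLEncoder.encode(token, StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }
        try (Scanner in = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8)) {
            String json = in.useDelimiter("\\A").next();
            // Crude check; a real implementation would parse the JSON properly.
            return json.replace(" ", "").contains("\"success\":true");
        }
    }
}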

