Foursquare is a popular location-based social media platform
Recently I have been getting to grips with the Foursquare API.
I really love playing with APIs, and the data you can tap into is often very impressive.
The key data to draw from 4sq are user check-ins, which show where a person has been. From this you can then build various visualisations or validations, such as:
Visualising check-ins on a map
Comparing user check-in history
Customer loyalty apps
The most important part is establishing a method to perform the API calls and properly handle the data returned.
The official Foursquare developer portal has a lot of good information to assist with this, but below is an open-source app I wrote myself.
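For the impatient, here is a rough idea of what the core call looks like. This is a minimal sketch using the requests library, not the app itself, and the token is obviously a placeholder:

```python
import requests

# Minimal sketch: pull a user's check-in history from the Foursquare v2 API.
# ACCESS_TOKEN is a placeholder for an OAuth token obtained via the flow
# described on the developer portal.
ACCESS_TOKEN = "YOUR_OAUTH_TOKEN"
API_URL = "https://api.foursquare.com/v2/users/self/checkins"

params = {
    "oauth_token": ACCESS_TOKEN,
    "v": "20120101",  # version date parameter required by the v2 API
    "limit": 50,
}

response = requests.get(API_URL, params=params)
response.raise_for_status()

# The check-ins live under response -> checkins -> items
for checkin in response.json()["response"]["checkins"]["items"]:
    venue = checkin.get("venue", {})
    print(venue.get("name"), venue.get("location", {}).get("city"))
```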
Cool Story Bro
Jump to the bottom for the good stuff
These days, computers and the internet have all but replaced the written letter. Gone are the days when you would sit and pen a note to loved ones, or write a strongly worded letter of complaint.
In fact, it seems that you can live day to day without ever having to put pen to paper.
(Unless you choose to bank with the German “Postbank”, who are an absolute bunch of fucking Luddites and for whom it is permanently 1985. Alas, I digress.)
Viddy well, little brother. Viddy well
Here is something that comes up quite a lot in the SEO tools I put together.
To be honest, I’m mostly posting this here so I can find it later when I need it in future projects.
Particularly for link building, it makes sense not to build multiple links from the same domain.
When doing this alone on a small project, this isn’t really a problem. But when working on a big project, or on a project with several people, you need a method to check a potential link source against a list of domains from which a link has already been placed.
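Here is a bare-bones Python sketch of the kind of check I mean. The file name is hypothetical, and the naive “www.” stripping won’t handle multi-part TLDs like .co.uk properly, but it shows the idea:

```python
from urllib.parse import urlparse

def root_domain(url):
    """Reduce a URL to its host so that http://www.example.com/page
    and https://example.com/other both resolve to 'example.com'.
    Note: this naive approach doesn't handle TLDs like .co.uk."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def is_new_source(url, used_domains):
    """True if no link has been placed on this domain yet."""
    return root_domain(url) not in used_domains

# placed_links.txt is a hypothetical file of already-used URLs, one per line
used = {root_domain(line.strip()) for line in open("placed_links.txt")}
print(is_new_source("http://www.example.com/blog/post", used))
```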
All links are equal, but some are more equal than others
Why are links important for SEO?
Links are important because they are a good metric for representing how valuable a website is.
Compared to other metrics such as traffic, which is often kept private, links are a publicly visible way of gauging how popular or important a website is.
In very basic terms, a link can be seen as a vote.
The more votes you get, the higher you rank in Google.
Whilst that serves as a nice analogy, the truth is a little more complex.
Unlike a democratic vote, in which all votes carry equal weight, in the Google algorithm all links are equal, but some are more equal than others.
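To make the weighted-vote idea concrete, here is a toy version of the sort of iterative calculation PageRank is based on. The link graph and damping factor are purely illustrative; this is not Google’s actual algorithm:

```python
# Toy PageRank: a link passes on a share of the linking page's own score,
# so a vote from an important page is worth more than one from a nobody.
links = {  # hypothetical link graph: page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
damping = 0.85
rank = {page: 1.0 for page in links}

for _ in range(50):  # iterate until the scores settle
    new_rank = {}
    for page in links:
        incoming = sum(rank[src] / len(outs)
                       for src, outs in links.items() if page in outs)
        new_rank[page] = (1 - damping) + damping * incoming
    rank = new_rank

print(rank)
```

After a few iterations, “c” comes out on top because it collects votes from both other pages, and a vote from the high-scoring “c” in turn makes “a” worth more than “b”, despite each receiving a single link.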
Mobile Browsers Are Smaller.
Today I want to talk about how to optimise your website for mobile browsers.
With the ever-increasing use of mobile devices for accessing the internet, it would be foolish to overlook how your website performs on mobile browsers. There are many things to take into account when considering how best to present the content of your website to mobile users, not least that the screen size is drastically smaller and navigation is often touchscreen-based.
Chief among them: any navigation must be large and well spaced, and text should be presented in a large, clear format.
The Subdomain Solution (Bad)
I have often thought this...
OK, so the privacy trick is more of a workaround than an exploit, and is by no means some kind of security hole, but it nonetheless gives up some data that most people would assume to be private.
The basic premise is very simple.
The “exploit”, if you can even really call it that, relates to the privacy settings of your “Friends List” on Facebook.
If you set your friends list as private but your friends do not, your friends list is not (entirely) hidden, and there is nothing you can do about it.
Keeping the British end up...
Seems the people at Twitter are “Player Haters” and have suspended the account.
Was fun while it lasted!
Good morning Internets.
Last night on the way to a party, I was walking down the icy cold Berlin streets and was struck with a cool idea.
I’d never built a Twitter bot before, but figured it couldn’t be too hard. I have worked with the Twitter API a few times and it’s pretty straightforward.
So, this morning, with last night’s Gin n Tonics fermenting nicely into a comfortable Gin headache, I put together the British English Bot.
It periodically checks Twitter for American-spelt words and offers corrective suggestions to the tweeter.
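The bot’s actual source isn’t reproduced here, but a minimal sketch of the approach using the tweepy library would look something like the following. The keys are placeholders and the spelling map is a tiny illustrative sample:

```python
import time
import tweepy

# Hypothetical sample of the American -> British spelling map
AMERICANISMS = {"color": "colour", "favorite": "favourite", "center": "centre"}

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

while True:
    for us_word, gb_word in AMERICANISMS.items():
        # Find recent tweets containing the American spelling
        for tweet in api.search(q=us_word, count=5):
            reply = ("@%s I think you'll find that's spelt '%s'."
                     % (tweet.user.screen_name, gb_word))
            api.update_status(status=reply,
                              in_reply_to_status_id=tweet.id)
    time.sleep(300)  # check periodically rather than hammering the API
```

In hindsight, unsolicited @-replies like this are exactly the sort of thing Twitter’s automation rules frown upon, hence the suspension.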
My fellow internets,
The internet we know and love finds itself in grave jeopardy.
A new bill being proposed in the US seeks to make fundamental changes to the way the internet works.
The bill has received almost unanimous criticism from all of the major online corporations (See the list below).
The bill amounts to government censorship of the internet and uses exactly the same kind of filtering that is already in place in countries such as China, Iran, and Syria.
Online piracy is a problem, nobody would dispute this, but you don’t cure a headache by cutting your head off.
Duplicate Content Causes Confusion
Perhaps the biggest and most common on-page SEO error I see across the web has to be Duplicate Content. The sad thing is that this problem is common across all kinds of websites, from small-time bloggers to large media websites and (perhaps most unforgivable of all) SEO agencies.
Duplicate Content is also probably the single SEO topic with the most confusion and, for lack of a better word, bullshit written about it.
The truth is that Duplicate Content is actually quite a simple concept. In fact there is pretty much one decisive rule that encompasses the whole idea of Duplicate Content:
One Piece of Content – One URL – No Exceptions
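To illustrate what that rule means in practice, here is a little Python sketch that collapses the classic duplicate-causing URL variants onto one canonical form. In the real world you would enforce this with 301 redirects or a rel="canonical" tag rather than in application code; this just demonstrates the principle:

```python
from urllib.parse import urlparse, urlunparse

def canonical_url(url):
    """Collapse common duplicate-content variants of a URL
    (www vs non-www, trailing slash, tracking parameters,
    upper-case host) onto a single canonical form."""
    p = urlparse(url)
    host = p.netloc.lower()
    if host.startswith("www."):
        host = host[4:]
    path = p.path.rstrip("/") or "/"
    # Drop query strings and fragments: tracking parameters and anchors
    # are the classic source of one-piece-of-content-many-URLs.
    return urlunparse(("http", host, path, "", "", ""))

# All three variants collapse to the same canonical URL:
for u in ("http://www.example.com/page/",
          "http://example.com/page?utm_source=feed",
          "http://EXAMPLE.com/page#comments"):
    print(canonical_url(u))
```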
This is how confusing the results are
Overall the results are quite confusing, but they lead me to some strange, possible conclusions:
- Googlebot CAN execute, and read the results of, AJAX requests
- Googlebot CAN NOT store/read Cookies
But the killer is:
Googlebot DOES NOT use this information in search relevancy calculations
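The test pages themselves aren’t reproduced here, but for anyone wanting to repeat the experiment, the server side can be as simple as an endpoint that the page calls via AJAX while also trying to read back a cookie. This is a hypothetical sketch using Flask; the route and file names are mine, not from the original test:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/ajax-endpoint")
def ajax_endpoint():
    # Log every hit: if Googlebot's user agent shows up here, the bot
    # must have executed the JavaScript that fired the AJAX request.
    ua = request.headers.get("User-Agent", "unknown")
    # If the page's JS set a cookie beforehand, its absence here
    # suggests the crawler does not store/read cookies.
    has_cookie = "test_cookie" in request.cookies
    with open("crawler_log.txt", "a") as log:
        log.write("%s cookie=%s\n" % (ua, has_cookie))
    return "unique-token-for-this-test"

if __name__ == "__main__":
    app.run()
```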