Schema.org SEO Strategies for 2016

Schema.org is a language for structured data, created by Bing, Google and Yahoo!, to “create and support a common set of schemas for structured data markup on web pages.”

It essentially allows search engine crawlers to understand what is on your web page more quickly and accurately.

From an SEO standpoint, we can use this structured data to avoid confusion, i.e. Google or others believing our page is about a different subject, or intended for a different audience.

It also allows us to mark up media content that would otherwise be difficult for spiders to interpret, such as videos and other embedded content.
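As a sketch of what that markup looks like in practice, here is a minimal JSON-LD snippet for a video (every name and URL below is an illustrative placeholder, not taken from the article):

```json
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Example video title",
  "description": "A short description crawlers can read, since they cannot watch the video itself.",
  "thumbnailUrl": "https://example.com/thumbnail.jpg",
  "uploadDate": "2016-01-15",
  "contentUrl": "https://example.com/video.mp4"
}
```

Dropped into a page inside a script tag of type application/ld+json, this tells a crawler what the embedded video is about without it having to parse the media file.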

Manipulation: SEO Tactics

Like anything new in the world of search, Schema.org is also open to manipulation.

How do we use Schema.org for SEO?

We use Schema.org for SEO by adding structured data to our website in a way that makes us seem more relevant to a niche than we actually are.

Each of the common Schema.org types has its own potential for manipulation, but the attitude behind them all is simple.

Ask yourself the question:

What characteristics does Google count as a benefit, and how can I make it appear that my website carries these?

For example, if you wanted to be considered an authority in the SEO industry, you would want to associate yourself with the other “big players” in the industry.

For a personal blog, you would want your author profile associated with sites such as Search Engine Land. This could be done using the “additionalType” property:

"additionalType": "",
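To show where that property would actually sit, here is a hedged sketch of a Person markup carrying it (the name, URLs and value below are hypothetical placeholders, not the value elided in the article):

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Example Author",
  "url": "https://example-blog.com/about",
  "additionalType": "https://example.com/placeholder-type-url"
}
```

The property takes a URL, so whatever entity you want to be associated with would be referenced by its address.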

Maybe you would want to seem like you had an affiliation with a professional industry organisation to give yourself perceived credibility.

So on a personal profile, you could use “memberOf” or “worksFor” with the Organization markup of the company you want to leech authority from.
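A minimal sketch of that idea in JSON-LD, with placeholder organisations standing in for whichever company you target (all names and URLs here are invented for illustration):

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Example Author",
  "worksFor": {
    "@type": "Organization",
    "name": "Example Agency",
    "url": "https://example-agency.com"
  },
  "memberOf": {
    "@type": "Organization",
    "name": "Example Professional Association"
  }
}
```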

Now, obviously, posting this once on your personal blog is not going to influence the Knowledge Graph, so this is the point where we need to think creatively.

There need to be multiple copies of these facts around the web, independently verified by unique sources, containing the exact or similar relationships described in your markup.

At a basic level, the Knowledge Graph verifies facts by analysing triples.

What are data Triples?

In the context of the Google Knowledge Graph, a triple of data consists of a Subject (the main entity), a Predicate (the relationship), and an Object (the related entity). So a triple captures the main subject and the relationship it has with another entity related to it.

An example of a Triple would be:

Matt (subject) doesn’t like (predicate) SEO (object).
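To make the structure concrete, here is a minimal sketch in Python that stores a few facts as (subject, predicate, object) tuples and looks up everything asserted about one entity — the entities and extra facts are illustrative, not from any real dataset:

```python
# Each fact is stored as a (subject, predicate, object) triple.
triples = [
    ("Matt", "doesn't like", "SEO"),
    ("Matt", "worksFor", "Example Agency"),
    ("Example Agency", "memberOf", "Example Industry Body"),
]

def facts_about(subject, triples):
    """Return every (predicate, object) pair asserted for a subject."""
    return [(p, o) for s, p, o in triples if s == subject]

print(facts_about("Matt", triples))
```

A knowledge graph does this at a vastly larger scale, but the core idea is the same: the more independent sources assert the same triple, the more confident the graph becomes in that fact.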

Now, we should be thinking like Google in everything we do, so how would we determine whether a fact is verified and true, as opposed to manipulation?

I would say that to be trusted, you need to be either a reasonable authority in the chosen niche, a trusted news outlet, or an authority on the chosen company/person (i.e. their official company/personal profiles or websites).

So when approaching manipulation in a niche, we could assess how easy it would be to add content to each of these three areas, and therefore how easy it would be to manipulate the results.

For example, a comment section on a business blog that isn’t heavily moderated, or a YouTube account with an open comments or discussion section.

Creating an account on the main industry forum could be useful, as could looking for niche news outlets that accept press releases.

Testing and Implementation

I have set up several tests for this, and the results will be entered here once they are properly analysed.

Have you experimented with Schema.org manipulation?

Got any results or comments to share?

Let me know in the comments below.