Monday 29 July 2013

Do You Know: 'Keywords' Are the Most Important Item in SEO. By Harindra Kumar (Smart SEO)


Keywords are the most important SEO element for every search engine; they are what search strings are matched against. Choosing the right keywords to optimize for is thus the first and most crucial step of a successful SEO campaign. If you fail at this very first step, the road ahead is very bumpy and most likely you will only waste your time and money. There are many ways to determine which keywords to optimize for, and usually the final list is made after careful analysis of what the online population is searching for, which keywords your competitors have chosen, and above all, which keywords you feel describe your site best.

Keywords in Special Places

Keywords matter not only in quantity but in quality as well. Keywords in the page title, the headings, and the first paragraphs count for more than keywords at the bottom of the page. The reason is that the URL (and especially the domain name), file names and directory names, the page title, and the headings for the separate sections are more important than ordinary text on the page. Therefore, all else being equal, if you have the same keyword density as your competitors but you also have keywords in the URL, this can boost your ranking considerably, especially with Yahoo!.

Keywords in URLs and File Names

The domain name and the whole URL of a site tell a lot about it. The presumption is that if your site is about dogs, you will have "dog", "dogs", or "puppy" as part of your domain name. If your site is mainly about adopting dogs, it is much better to name it "dog-adopt.net" than "animal-care.org", because in the first case you have two major keywords in the URL, while in the second you have at most one potential minor keyword.

When hunting for keyword-rich domain names, don't get greedy. While from an SEO point of view it may seem better to have 5 keywords in the URL, just imagine how long and difficult to memorize that URL would be. You need to strike a balance between keywords in the URL and site usability, which says that more than 3 words in the URL is way too much.

You will probably not be able to come up with tons of good suggestions on your own. Additionally, even if you manage to think of a couple of good domain names, they might already be taken. In such cases, tools like the one below can come in very handy.

File names and directory names are also important. Often search engines will give preference to pages that have a keyword in the file name. For instance, http://mydomain.com/dog-adopt.html is not as good as http://dog-adopt.net/dog-adopt.html, but it is certainly better than http://mydomain.com/animal-care.html. The advantage of keywords in file names over keywords in the domain is that they are easier to change if you decide to move to another niche, for example.
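
To illustrate the idea (this is just a rough Python sketch, not part of any tool mentioned in this post; the example titles and the three-word limit are my own assumptions), you could generate short, keyword-rich file names from your page titles like this:

import re

def slugify(title, max_words=3):
    # Turn a page title into a short, keyword-rich file name.
    words = re.findall(r"[a-z0-9]+", title.lower())
    # Keep the slug short for usability: at most max_words words.
    return "-".join(words[:max_words]) + ".html"

print(slugify("Adopt a Dog"))                 # adopt-a-dog.html
print(slugify("Homemade Dog Food Recipes"))   # homemade-dog-food.html

In practice you might also strip stop words like "a" and "the" so that only the real keywords end up in the file name.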


Keywords in Page Titles

The page title is another special place, because the contents of the <title> tag usually get displayed in search engine results (including Google's). While the HTML specification does not require you to write anything in the <title> tag (you can leave it empty and the title bar of the browser will read "Untitled Document" or similar), for SEO purposes you should not leave the <title> tag empty; instead, write the page title in it.

Unlike URLs, with page titles you can get wordy. If we go on with the dog example, the <title> tag of the home page for http://dog-adopt.net can include something like this: <title>Adopt a Dog – Save a Life and Bring Joy to Your Home</title>, <title>Everything You Need to Know About Adopting a Dog</title>, or even longer.

Keywords in Headings
Normally headings separate paragraphs into related subtopics. From a literary point of view it may be pointless to have a heading after every other paragraph, but from an SEO point of view it is very good to have many headings on a page, especially if they contain your keywords.

There are no technical length limits for the contents of the <h1>, <h2>, <h3>, ... <hn> tags, but common sense says that overly long headings are bad for page readability. So, as with URLs, be wise about the length of headings. Another issue to consider is how the heading will be displayed. If it is Heading 1 (<h1>), this generally means a larger font size, and in that case it is advisable to have fewer than 7-8 words in the heading; otherwise it might spread over 2 or 3 lines, which is best avoided.
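
If you want to check your own pages against these guidelines, here is a minimal sketch using Python's standard html.parser module; the sample HTML and the 8-word threshold are only illustrative assumptions, not a rule from any search engine:

from html.parser import HTMLParser

class HeadingChecker(HTMLParser):
    # Collect the <title> and <h1>-<h6> texts from an HTML page.
    def __init__(self):
        super().__init__()
        self.current = None
        self.found = []  # list of [tag, text]

    def handle_starttag(self, tag, attrs):
        if tag == "title" or (len(tag) == 2 and tag[0] == "h" and tag[1].isdigit()):
            self.current = tag
            self.found.append([tag, ""])

    def handle_endtag(self, tag):
        if tag == self.current:
            self.current = None

    def handle_data(self, data):
        if self.current:
            self.found[-1][1] += data

page = """<html><head><title>Adopt a Dog - Save a Life</title></head>
<body><h1>Everything You Need to Know About Adopting a Dog</h1></body></html>"""

checker = HeadingChecker()
checker.feed(page)
for tag, text in checker.found:
    words = text.split()
    note = "OK" if len(words) <= 8 else "consider shortening"
    print(f"<{tag}> {len(words)} words ({note}): {text.strip()}")

Running it lists every title and heading with its word count, so you can quickly spot headings that would wrap over 2 or 3 lines.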

Choosing the Right Keywords to Optimize For

The time when you could easily top the results for a one-word search string seems like centuries ago. Now that the Web is so densely populated with sites, it is next to impossible to achieve constant top rankings for a one-word search string. Achieving constant top rankings for two-word or three-word search strings is a more realistic goal.

For instance, if you have a site about dogs, do NOT try to optimize for the keyword "dog" or "dogs". Instead, focus on keywords like "dog obedience training", "small dog breeds", "homemade dog food", or "dog food recipes". Success with very popular one- or two-word keywords is very difficult and often not worth the trouble; it's best to focus on less competitive, highly specific keywords.

The first thing you need to do is come up with keywords that describe the content of your website. Ideally, you know your users well and can correctly guess what search strings they are likely to use to find you. You can also try the Website Keyword Suggestions Tool below to come up with an initial list of keywords. Run your initial list through the Google Keyword Suggestion tool; you'll get a related list of keywords. Shortlist a couple of keywords that seem relevant and have a decent global search volume.

Keyword Density

After you have chosen the keywords that describe your site and are supposedly of interest to your users, the next step is to make your site keyword-rich and to have a good keyword density for your target keywords. Keyword density, although no longer a very important factor in SEO, is a common measure of how relevant a page is. Generally, the idea is that the higher the keyword density, the more relevant the page is to the search string. The recommended density is 3-7% for the major 2 or 3 keywords and 1-2% for minor keywords. Try the Keyword Density Checker below to determine the keyword density of your website.
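
As a rough illustration of how such a checker works (a minimal Python sketch; the sample text, the keywords, and the way phrase density is counted are my own assumptions, since different tools define density slightly differently):

import re

def keyword_density(text, keywords):
    # Return the density (%) of each keyword or phrase in the text.
    words = re.findall(r"[a-z0-9']+", text.lower())
    total = len(words)
    densities = {}
    for kw in keywords:
        kw_words = kw.lower().split()
        n = len(kw_words)
        # Count occurrences of the phrase with a sliding window over the words.
        hits = sum(1 for i in range(total - n + 1) if words[i:i + n] == kw_words)
        densities[kw] = 100.0 * hits * n / total if total else 0.0
    return densities

sample = ("Adopting a dog is rewarding. Dog adoption gives a homeless dog "
          "a new home, and dog adoption is easier than many people think.")
print(keyword_density(sample, ["dog adoption", "dog"]))

If a major keyword comes out well above the 3-7% range, that is usually a sign the text is drifting toward keyword stuffing.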



Although there are no strict rules, try optimizing for a reasonable number of keywords – 5 or 10 is OK. If you attempt to optimize for a list of 300, you will soon see that it is just not possible to maintain a good keyword density for more than a few keywords without making the text sound artificial and stuffed with keywords. What is worse, there are severe penalties (including a ban from the search engine) for keyword stuffing, because it is considered an unethical practice that tries to manipulate search results.


For more details visit here - Get Smart SEO Tips

Friday 26 July 2013

Clearing Up the Confusion: Can a Website Do Well Without SEO? By Harindra Kumar

I shouldn’t take the bait — Robert Scoble’s latest missive that SEO isn’t important. But sometimes I can’t help myself for wanting to provide some perspective. I’ve covered the space going on 14 years now. I’ve heard the SEO is dead spiel over and over and over again. I feel like a revisit to the first major prediction of this back in 1997 is in order. Somehow, it has survived since then.

In that year, the Online Advertising Discussion List was one of the primary ways that internet marketers communicated with each other about trends and tactics. We didn’t have forums. We didn’t have Twitter. We didn’t have AdWords. And we walked eight miles through the snow to even use a search engine.
Richard Hoy posted this to the list in November 1997. I’ll bold the key part, as well as key parts in other quotes further below:

I’m beginning to believe that search engines are a dead-end technology and fretting over where your site comes up is a big waste of time. I’m now advising clients that we create good META tags, submit the site and then forget it.
I base this newfound philosophy on a couple of things. First, I've noticed on the sites we manage that the percent of traffic from search engines drops as the investment in other types of promotion increases.
For example, The Year 2000 Information Center ( http://www.year2000.com/ ), a site we own and promote heavily through PR and co-promotional arrangements, had 6% of its traffic come from search engines last month. 94% came from sources such as online articles, co-promotion, and people using a bookmark.
I see the exact opposite situation in the traffic reports of sites that we do little promotion for. The bulk of their traffic comes from search engines. And that makes perfect sense because without promotion search engines are the only way people can find these sites…
How can such an unstable system survive? Moreover, how can you ever hope to be on top of it for long?
So in closing, I submit that search engines are dying. In fact, I would say they are dead already and just don’t know it yet – gone the way of the reciprocal link exchange and the “you have a cool page” award as an effective promotional tool. A victim of their own success.
Now compare that to what Robert Scoble wrote today in his post "2010: the year SEO isn't important anymore?":
I came away from this conversations thinking that SEO is getting dramatically less important and that SEM should be renamed to “OM” for “Online Marketing” since small businesses need to take a much more holistic approach to marketing than just worrying about search results.
So just over 12 years ago, we had someone saying pretty much the same thing that Scoble wrote today: that you shouldn't worry about search, or that you should certainly be doing more than search.
Let's Get Another Important Fact About SEO History
An important aspect of Search Engine Optimization is making your website easy for both users and search engine robots to understand. Although search engines have become increasingly sophisticated, in many ways they still can't see and understand a web page the same way a human does. SEO helps the engines figure out what each page is about, and how it may be useful for users.
Of course, SEO is not the only way to get tons and tons of traffic to your site, but without a doubt it is the most effective one. Why am I saying that? Well, let's just take things as they are. Say you get traffic from a website with content related to yours. That traffic reflects the impression your post made on the writer who linked to your page; the visiting user may or may not like what they find there, and as a result may or may not come back to your site.

What is SEO, Exactly?

The goal of foundational SEO isn't to cheat or "game" the search engines. The purpose of SEO is to:
• Create a great, seamless user experience.
• Communicate your intentions to the search engines so they can recommend your website for relevant searches.
Search engine optimization has changed significantly since the early days when the term was first coined, and industry leaders are beginning to hint at a fundamental philosophical shift that would effectively render traditional SEO a dead or dying craft. It is time to re-imagine what it means to manage search engine rankings.

Some history to put this into context: since its inception, SEO has been tactical and reactive by nature. Optimizers would determine what a search engine uses to qualify a site and find the most efficient means to satisfy that requirement in order to perform well in the search results. The tactics employed by practitioners have evolved over time, reflecting a coyote-versus-roadrunner game in which marketers try to reverse engineer the ranking algorithms of popular search engines like Google and Bing in order to make their websites more favored and thus ranked higher.

In the earliest days, search engines relied heavily on webmasters' use of HTML meta tags to identify keywords related to the content of each page on the site. A search engine would then prioritize rankings based on factors such as keyword density (the number of occurrences of the keyword on the page) in order to determine ranking order. When Google was introduced in 1998, it revolutionized search engine relevance by looking at inbound links to determine the quality and significance of a document. The concept was modeled on the academic notion that the number and quality of citations for an article is a good measure of the article's significance.

This was an important step forward because webmasters were already gaming the search rankings through a method known as "keyword stuffing." A site would place as many as a hundred repetitions of the same keyword at the bottom of the page and make it the color of the background, so users would never see it but the engines would.

Eventually, the emphasis of SEO shifted from on-site content to the offsite effort of link building. In the beginning, webmasters would simply maintain a "links" page somewhere on their site and trade reciprocal links (I'll link to you if you link to me). Google figured this one out, and the practice became more complex, with link building services offering three-way reciprocal linking, a method that was a degree more sophisticated and couldn't be detected, for a while. And as the search engines became more savvy about the quality of links, the tactics continued to evolve, and cottage-industry services began to emerge to serve the demand for increasingly sophisticated link network implementations.

The tactics have continued to evolve and become more complex since then, as search engines have become increasingly able to detect efforts to manipulate or influence rankings. In 2009, Google released an update called Vince that marked a significant philosophical shift toward favoring large, well-known brands in the search results. Later that same year, releases followed that enabled the search engine to begin factoring in user behavior as an indication of the quality of a site, such as how long a visitor would stay on a referred website before returning to the search results. In 2010, the search engine began factoring in social signals, looking at how frequently a website is mentioned in the social sphere. All of these new criteria set the stage for increased scrutiny of websites based on offline reputation and what end users actually think of them. Collectively, these efforts signaled a move in favor of overall long-term brand reputation and user preference, and away from the tactical methods that had been used and gamed so pervasively up to that point.

And then the storm came, as Google began rolling out more frequent and more aggressive updates that both strengthened its ability to detect quality signals beyond simply looking at content and links, and took dramatic steps to rein in the quality of those criteria. In 2011, Google's first Panda update was released, making sweeping changes to the search results and affecting roughly 12 percent of search queries due to perceived low-quality content. Numerous releases followed. Then, in 2012, Google's Penguin updates began discounting the sophisticated inbound link structures that had been built.
Today, it is not uncommon to hear about online businesses that built successful media websites, did well for years, and then suddenly lost half of their traffic overnight. In many cases, these businesses thought they were playing by the rules, but they ignored one important point: their entire business was predicated upon ranking well in the Google search results, and outside of Google they often do not exist. By Google's new definition of quality, this premise positions the website as probable spam that should be removed from its index.


For this reason, the zeitgeist of the SEO world has recently started to make a fundamental philosophical shift. Until now, the craft of SEO has been markedly tactical and reactive in nature: just figure out what the search engines want and adapt to it. But thought leaders in the space have begun hinting that tactical reaction isn't going to work much longer. In fact, it may already have become cost-ineffective for many businesses. For this reason, online businesses need to begin thinking beyond search rankings now. What is going to work in the future is the traditional business and brand building effort that has been the foundation of building a business for centuries.

If you have any query, please write it here and I'll give you possible answers.


For more details visit here - Get Smart SEO Tips

One Fact: SEO Is Not Dead And Will Never Die (By Harindra Kumar)

Neal Cabage is only the most recent writer to use the words “SEO is dead” in a headline. His article joins a long line of other articles and even a tongue-in-cheek website dedicated to this topic. There are also many well-worded refutations of the idea that SEO is dead, of which Danny Sullivan’s 2009 post is a prime example. Most articles with that headline or a variation thereof are actually writers trying to stir up some controversy so they can then explain why SEO in fact is not dead. Whatever the reason, the phrase has become cliché and tired, and I say that with this post, let us never see the words “SEO” and “dead” in a headline ever again. Let’s move on. But first, this final explanation of why SEO is not dead, and will never die.
SEO, an acronym for search engine optimization, is broadly defined as including any activity or set of activities designed to get business from the organic or natural search results in a search engine. If you change the title tag on your homepage in the hope it will cause your website to rank better on Google, you’re doing SEO. If you add a blog to your website because you heard Google likes content, and you blog every week because you hope this will get your website ranking higher for more terms, you’re doing SEO. If you convince a friend who works at a reputable online publication to write an article about your company and link to your company’s website, you’re doing SEO.

 There are ways to get business from search engines that are not generally defined as having anything to do with SEO. Google and other search engines sell ad space alongside their organic search results, and buying these ads is not SEO, although the information gained from running these ad campaigns can often be beneficial to one’s SEO efforts.

What could kill SEO?
SEO will die only when the search engine dies. As long as there are search engines, people will figure out how they work in order to get business from them. We might talk about the end of search as we know it, about how content marketing is changing what SEO is, about SEO and public relations merging, or about the acronym SEO dying out so that the activities formerly known as SEO become part of a larger group of activities we call "online marketing" or "web marketing" or something fancier sounding. The fact remains that we'll still be performing activities designed to get business from the natural search results in search engines, and therefore SEO will be alive and kickin'.

Why claim SEO is dead?
If SEO will never die, then why do people claim it’s dead, or even bring up the matter in the first place? As they say, follow the money. Sure, go ahead, lump me in the group of those trolling for traffic by using the phrase. But somebody’s got to put an end to this, and I can’t very well do that without mentioning what I’m trying to put an end to.

Perhaps nothing will do as good a job of putting a final nail in the coffin of the "SEO is dead" mantra as spreading the painfully accurate "Death of SEO" material provided at Get Smart SEO Tips. And now, let us never speak of this again.


For more details visit here - Get Smart SEO Tips

Sunday 21 July 2013

Hey Guys, Good News: Google doesn't deny 'Negative SEO' tactics in recent update


Google recently updated the wording around its Webmaster Tools “Can Competitors harm ranking?” help page, leaving some to wonder if brands can use negative SEO to harm their competitors online. If this is the case, publishers will need to be more vigilant about their search marketing strategies to verify that competitors aren’t damaging their online visibility.

The search engine previously asserted that websites’ search rankings couldn’t be damaged if competitors used black hat techniques against them.
Search Engine Watch reports that Google's initial iteration of the page stated: "There's nothing a competitor can do to harm your ranking or have your site removed from our index." Last year, it qualified the information by announcing there is "almost nothing a competitor can do."

The new guidance leaves that claim out, and instead advises companies to take action if they think their rankings are being impacted by competitors' practices. Google explains that it "works hard to prevent other webmasters from being able to harm your ranking or have your site removed from our index," implying brands might have to protect themselves from harmful practices at the end of the day.


Google search engineer Matt Cutts is featured in a video on the updated page, telling marketers to report spam content they find on the web as a way to police sites that use black hat techniques to outrank brands that employ SEO content best practices. Relevant, high-quality web content outweighs SEO fads in the long run, Cutts elaborates, so it behooves marketers to build strategies that withstand the test of time.

In a recent blog post, Brafton listed six ways for marketers to improve their link profiles if they are suffering at the hands of the latest Penguin update or malevolent backlinks from competitors. Best practices like building new landing pages, publishing fresh content on a daily basis and growing a social media presence are ways to enhance search performance organically. 



For more details visit here - Get Smart SEO Tips



Friday 19 July 2013

GET IT: WHERE TO FIND THE LATEST GOOGLE NEWS AND UPDATES


Google has come to be known as the online brain, but this does not mean that it gives you everything on a silver platter. You still need to know how to go about finding the content you need. Beyond content, the company releases news and updates every now and then, and most people who use Google products and services need a way of keeping up with them. The good thing is that you can very easily keep track of them. This article describes some of the best places to get the latest Google news and updates.

The official Google blog
This is one of the surest places to find accurate and up-to-date news relating to any Google matter, because the blog was created as a link between Google and its users, so all the necessary information can be found here. The best thing about this source is that everything published there comes from Google personnel, hence it is accurate. It is all too easy to get news and updates from sources that rely on rumours, and this is what makes the official Google blog ideal as a source. In addition, the blog allows you to ask questions and discuss the content with other readers, meaning that you will be well informed.

Online press release sites
Online marketing has taken a different turn, and most major businesses now use online marketing strategies to stay ahead of the competition. One way this is done is by releasing any newsworthy announcement in the form of an online press release for bloggers to read. With this said, all you need to do is be aware of two or three press release sites and search for the latest Google news, and you will most likely come across the latest release. When looking for the latest Google news and updates this way, be careful to use a well-updated site to make sure you have the latest information.

Google newsletters
Still on the note of online marketing, the use of online newsletters has grown, and this is another channel you can use to get all the information you need. All you need to do is visit the official Google website and subscribe to the newsletter. The good thing about newsletters is that they are sent to your email on a regular basis, so no relevant information will pass you by. In addition, the newsletters are written by Google personnel, hence they are highly accurate.

Using Google
This is the last and simplest way to get the updates. The logic behind this method is that Google is a search engine, so all you have to do is search for the latest Google news and updates and you will get several sites with information on the topic. The only trick is knowing which sites give accurate information, and this can be checked by looking at how frequently a site is updated.


For more details visit here - Get Smart SEO Tips

Wednesday 17 July 2013

What's new in SEO?


SEO is always new, because it's evolving every single day. This is a very dynamic field, which will keep you on your toes 24x7. SEO is the quintessential example of 'change is the only constant'.
Success in SEO depends significantly on the search engines, since the whole idea is to optimize the website for them. Hence any change in search engine dynamics immediately impacts the SEO community. Techniques used in SEO are continuously being refined and re-refined, some old ones are discarded, and a few new ones are added. The bottom line is that SEO never stagnates.

It's a field for the learner, because in SEO learning never ends. There's always something new, something better to learn and explore. A good SEO analyst must evolve right along with the search engines.

Scope of SEO, SEO as a Career Option

If you want to choose SEO as a career option or you want to know more about this field, you are at the right place. Go ahead, read below to get all your queries answered.

Track back SEO

Let's start with a bit of history, to get a better idea of how this field emerged. SEO is a fairly new field, roughly 15-20 years old!
The internet has grown enormously since its advent; as the reach of the World Wide Web stretched, so did its users. Search engines made the internet simpler for users and more complicated for companies, because they made it important for every company to be on top of the search engine results page. Until around the mid-90s, it was the job of webmasters to optimize web sites so that they showed up high in the search engine results pages.
Slowly the importance of being high up on the search engine results page crept upwards. And as its importance increased, so did the ideas associated with it. These finally led to the creation of a whole new field, christened SEO (Search Engine Optimization).

SEO & Money Capers?

Sure, SEO can earn you a lot. You can see living proof of SEO analysts' success in names like Jill Whalen of High Rankings, Rand Fishkin of SEOmoz, and many others.
SEO is a profession that offers freedom; you can either work for an organization or start your own venture. If you work for an organization, even with little experience you can expect an international salary of around $30,000 p.a. With time and experience, earning $100,000 p.a. is not unheard of for an SEO analyst. And if you start your own venture, the sky is the limit for earning with SEO.



For more details visit here - Get Smart SEO Tips

Sunday 14 July 2013

Welcome Guest Articles: Blogging Tips Only a Writer Would Know

Part of creating good SEO is having a blog on your website, and part of having a blog on your website is having great content on that blog. This is where SEO writers come into play. Writers, of course, don't have to be focused on SEO (that can be your job), but most writers who offer guest articles to a blog know a thing or two about the process. They know what it means to use keywords naturally, they know how important link-ability is, and they know how to analyze a website and decide whether it looks high quality in the eyes of Google.

To make a long story short, writers know a thing or two about creating a blog that appeals to great online writers. You have your classic tips—ask for pitches first, never make too many edits, have a “guest post” tab, etc.—but writers can offer you a few tips that are less well-known.

From a Writer: A Few Unique Thoughts for Websites That Welcome Guest Articles

Understanding the ins and outs of your business means understanding your audience. When it comes to accepting guest articles on your website, the potential writers are essentially your audience. You can offer them a great backlink, but that's no longer anything special (every company has caught on to why it's important).

You want to create a great place for your contributors, and that means listening to their observations. Below are some of the things that writers notice when it comes time to guest post on a website:

There is nothing worse than writing for a site that doesn’t allow you to subscribe to comments.
Subscribing to comments on a post you've written is great because you don't have to keep checking back to see if someone commented. It shows up right in your inbox, and you can be sure you're replying to everything, even months later. It's good for the website, the writer, and those commenting, so it's worth the little bit of extra time it takes to make it happen.

Telling a writer about other sites that you own is like hitting the jackpot. 
If you own more than one website, mention that to writers that ask to contribute to your site. In many cases, the writer will be able to contribute to that site as well, so it’s a win-win situation.

Telling a writer you are not going to accept an article is much better than just ignoring the issue.
It seems that editors sometimes assume writers don't want their inboxes full of "no thank you" emails, so they avoid sending an email altogether. This is certainly not the case. Even if you want to be very short with a writer, it's better to send something than nothing at all. This will help the writer stop wondering what's going on with a particular article and get them out of limbo.

If you don’t have an actual email address on your contact page, a writer might move to a new site.
There are absolutely pros to not putting an actual email address on your contact page (spammers, for one), but in some cases this isn't ideal for writers. Many writers like to write an article first and then send it to a website. Websites are often more responsive when this happens, because they know that you're serious. As a writer, you don't want to sit there and wait to maybe get a response from a random contact form you filled out (and therefore have no record of in your inbox). It's just not ideal.

So what’s the moral of the story for small businesses? If you want to accept guest articles on your blog, keeping these things in mind will give you a better chance of creating a good relationship with your writers. This doesn’t mean that you have to make your guest posting process easy, but just be aware of some of the things that will help writers be more successful (and ultimately help you become more successful).

Are you an online writer who has a few tips for small businesses looking to accept guest posts? Let us know your thoughts in the comments below.

For more details visit here - Get Smart SEO Tips

Saturday 13 July 2013

Google Penguin 2.0: Now What for SEO?

As usual, when there is a Google algorithm update, business owners have to stop whatever they were doing and focus on the update. A Penguin or Panda update can mean big changes for a website, so you have to always be prepared, informed, and ready to react and recover.  Google is constantly making algorithm changes in order to improve the results that show up on a SERP, and whether you agree with a change or not, Google is the one that gets to decide if you're on page 1 or page 10.
The algorithm update occurred on May 22, but that doesn’t mean the ripple effects are over. This was a major update that affected about 2.3 percent of English-US queries. Companies need to ask themselves: What comes next for my SEO efforts now that Penguin 2.0 is official?
Penguin 2.0 and How It Affected SEO
It's first important to understand what a Penguin update even means. While Google rolls out various categories of algorithm updates, two are the best known: Penguin and Panda. In general, a Panda update typically targets content quality, whereas a Penguin update targets anything regarding webspam.
What’s unique about this update: This is the first Penguin update to actually change the algorithm as opposed to just refreshing it.
This particular Penguin update focused on a few different areas of webspam including:
  • Authority- This is probably the biggest change you'll see with the algorithm (and the first of its kind). Google wants to help sites become an authority in their niche, and then wants to rank those sites higher. One of the easiest ways to become an authority is through the use of Author Rank, which you can learn about here. Other ways to position your company as an authority include social sharing, rich snippets, and Google+ activity.
  • Anchor Text- With this latest update, as with previous Penguin updates, Google is looking closely at the anchor text being used to link to your website. If the majority of the anchor text linking to you is keyword rich, Google will probably find that very unnatural. Google will go down to the keyword level and lower your rankings (not impacting your whole website). The good news is that with a little cleanup and by diversifying your anchor text with a focus on branded variations, the impact can be reversed over time (the short sketch after this list shows one way to spot an over-optimized anchor-text profile).
  • Advertorials- An advertorial is a place where you find links that were clearly sold rather than earned. Google does not tolerate selling links, because paid links do not reflect genuine relevance (it's almost like trying to trick the Google bots), so the new algorithm change has made this a priority. With the new change, any website paying for content placements (sponsored posts as well as blog posts) is going to need to include a disclaimer and/or use only nofollow links.
  • Hacked CMS- More and more websites are getting hacked because the content management system (CMS) hasn't been updated or isn't secure enough. This typically penalizes a website and affects its rankings, whether it was the fault of the webmaster or not. The new algorithm changes put a focus on this to make things fair and give webmasters a certain amount of time to fix the problem without being penalized.
Aside from these four changes, other past Penguin considerations are only going to get stricter with the Google bots. This includes common black hat tactics such as doorway pages and keyword stuffing as well as link spam.
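
As a quick illustration of the anchor text point above (a small Python sketch; the anchor texts and the 40% threshold are made-up examples, not a Google rule):

from collections import Counter

# Hypothetical anchor texts pulled from a backlink report.
anchors = [
    "dog adoption", "dog adoption", "dog adoption", "dog adoption",
    "Safari Pet World", "click here", "www.safaripetworld.com", "this guide",
]

counts = Counter(a.lower() for a in anchors)
total = len(anchors)
for anchor, n in counts.most_common():
    share = 100.0 * n / total
    # 40% is an arbitrary illustration threshold, not an official limit.
    flag = "  <-- looks unnatural?" if share > 40 else ""
    print(f"{anchor:25s} {share:5.1f}%{flag}")

A profile dominated by one exact-match keyword is exactly the pattern Penguin looks for; branded and generic anchors should make up most of the mix.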
How Businesses Can Recover from Penguin 2.0
Recovering from Penguin 2.0 is all about positioning your website as an authority. If your website is using advertorials or black hat tactics, stop right away and work to earn links organically. It may take a little bit longer, but it’s worth it in the end. When it comes to links, determine which ones are low quality or hire a professional to give you a link audit. Get those links removed, perform disavow requests, and submit a reconsideration request to Google if necessary.
As far as making yourself an authority in your niche, it’s all about authorship and social shares. Make sure you have used the authorship tag and make sure that you are constantly tweeting and sharing your articles and creating relationships with others who will do the same.

Did Penguin 2.0 affect your business? What changes did you see, and what are you doing to recover? Let us know your story at Get Smart SEO Tips.

Tuesday 9 July 2013

Common Web Development Mistakes That Make Any SEO Facepalm


Having started off as a web developer myself, I can totally empathise with a lot of the issues that affect search engine optimisation on existing sites. A web developer has the complex task of building a website based on the materials they are provided. I always use the analogy that a web developer is like a builder of houses. Once they finish the house they are building, they move on to the next one and are not really concerned with what happens after. This is often the case, particularly in the high-turnaround environments that many smaller companies and agencies find themselves in. Unfortunately, this leaves the occupant of the house standing in the doorway, looking around and waiting for the traffic.

Multiple Versions of the Homepage

There should only ever be one version of the home page, e.g. http://www.safaripetworld.com, and this should be the only one used throughout the site when linking internally and externally. It may come as a surprise to many people, but even adding a trailing slash ("/") to the end of a URL makes it different from its equivalent without one!

Here are some examples of common home page duplicates:

Non-www – http://safaripetworld.com
HTTPS – https://www.safaripetworld.com
File names – http://www.safaripetworld.com/index.php

Worst Case Scenario
In the worst case scenario we could have 8 versions of the home page:
http://www.safaripetworld.com
http://safaripetworld.com
http://www.safaripetworld.com/index.php
https://www.safaripetworld.com
http://safaripetworld.com/index.php
https://safaripetworld.com
https://safaripetworld.com/index.php
https://www.safaripetworld.com/index.php
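
The eight versions are simply every combination of scheme, host, and file name. A tiny Python sketch (using the same example domain as above) makes the combinatorics obvious:

from itertools import product

schemes = ["http://", "https://"]
hosts = ["www.safaripetworld.com", "safaripetworld.com"]
paths = ["", "/index.php"]

# Every combination is a distinct URL to a search engine: 2 x 2 x 2 = 8 versions.
for scheme, host, path in product(schemes, hosts, paths):
    print(scheme + host + path)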




Solutions =>

The quickest and easiest fix is to implement the canonical tag in the <head> section of the page. This would look like:
<link href="http://www.safaripetworld.com" rel="canonical"/>
This lets search engines know which is the one and only canonical version.


Get Smart SEO Tips





Monday 8 July 2013

You’re Dead… If You Don’t Integrate Social Media & PR Into Your Local SEO Strategy


The peppered moth (Biston betularia), once white with black spots, faced a strange challenge in London during the Industrial Revolution. Buildings and trees, stained with soot, turned black. The light colored moths could no longer hide against this backdrop, and were eaten up by birds. But some peppered moths survived -- by turning black themselves! To survive, you must adapt and change with your environment. Otherwise, you're dead. That's the powerful lesson business owners, marketers and entrepreneurs can learn from a humble moth.

Google’s PR Problem Is Bing’s Opportunity

Google has a PR problem. No, I don't mean PageRank. I'm talking about the original definition of PR: Public Relations. And it's maybe less Public Relations than it is Webmaster Relations. You see, Google hasn't done a good job of balancing content about problems with content about successes and improvements. They haven't really needed to; with 70% market share in search, there wasn't a competitor in sight. But as Bing gains market share (both in general and through its partnerships with Facebook and Siri), Google is going to need to pay more attention to its communication strategy. Years ago, webmasters had no way to communicate with Google. They could analyze logfiles and write their own little programs to apply fixes to their sites, but there was no opportunity to get data from Google directly.
Smart SEO





For More Details Please Click Here - Get Smart SEO Tips

Thursday 4 July 2013

How Google Works: Please Don't Think of Googlebot as a Toy


Google runs on a distributed network of thousands of low-cost computers and can therefore carry out fast parallel processing. Parallel processing is a method of computation in which many calculations can be performed simultaneously, significantly speeding up data processing. Google has three distinct parts:
1. Googlebot, a web crawler that finds and fetches web pages.
2. The indexer, which sorts every word on every page and stores the resulting index of words in a huge database.
3. The query processor, which compares your search query to the index and recommends the documents that it considers most relevant.
Let’s take a closer look at each part.

1.  Googlebot, Google’s Web Crawler

Googlebot is Google’s web crawling robot, which finds and retrieves pages on the web and hands them off to the Google indexer. It’s easy to imagine Googlebot as a little spider scurrying across the strands of cyberspace, but in reality Googlebot doesn’t traverse the web at all. It functions much like your web browser, by sending a request to a web server for a web page, downloading the entire page, then handing it off to Google’s indexer.

Googlebot consists of many computers requesting and fetching pages much more quickly than you can with your web browser. In fact, Googlebot can request thousands of different pages simultaneously. To avoid overwhelming web servers, or crowding out requests from human users, Googlebot deliberately makes requests of each individual web server more slowly than it’s capable of doing.

Besides following links, Google accepts direct page submissions through its Add URL form. Unfortunately, spammers figured out how to create automated bots that bombarded the Add URL form with millions of URLs pointing to commercial propaganda. Google rejects URLs submitted through its Add URL form that it suspects are trying to deceive users by employing tactics such as including hidden text or links on a page, stuffing a page with irrelevant words, cloaking (aka bait and switch), using sneaky redirects, creating doorways, domains, or sub-domains with substantially similar content, sending automated queries to Google, and linking to bad neighbors. So now the Add URL form also has a test: it displays some squiggly letters designed to fool automated “letter-guessers”; it asks you to enter the letters you see — something like an eye-chart test to stop spambots.

When Googlebot fetches a page, it culls all the links appearing on the page and adds them to a queue for subsequent crawling. Googlebot tends to encounter little spam because most web authors link only to what they believe are high-quality pages. By harvesting links from every page it encounters, Googlebot can quickly build a list of links that covers broad reaches of the web. This technique, known as deep crawling, also allows Googlebot to probe deep within individual sites. Because of their massive scale, deep crawls can reach almost every page on the web. Because the web is vast, this can take some time, so some pages may be crawled only once a month.

Although its function is simple, Googlebot must be programmed to handle several challenges. First, since Googlebot sends out simultaneous requests for thousands of pages, the queue of “visit soon” URLs must be constantly examined and compared with URLs already in Google’s index. Duplicates in the queue must be eliminated to prevent Googlebot from fetching the same page again. Googlebot must determine how often to revisit a page. On the one hand, it’s a waste of resources to re-index an unchanged page. On the other hand, Google wants to re-index changed pages to deliver up-to-date results.
To keep the index current, Google continuously recrawls popular, frequently changing web pages at a rate roughly proportional to how often the pages change. Such crawls keep the index current and are known as fresh crawls. Newspaper pages are downloaded daily; pages with stock quotes are downloaded much more frequently. Of course, fresh crawls return fewer pages than the deep crawl. The combination of the two types of crawls allows Google both to make efficient use of its resources and to keep its index reasonably current.
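
The core idea of the deep crawl, harvesting links into a queue while skipping URLs already seen, can be sketched in a few lines of Python. To be clear, this is only a toy illustration of the queue-plus-duplicate-elimination idea described above, not how Googlebot is actually implemented, and the tiny in-memory "web" is invented so the example runs without any network access:

from collections import deque
from urllib.parse import urljoin, urldefrag

def crawl(seed_urls, fetch, max_pages=100):
    # A minimal crawl frontier with duplicate elimination.
    frontier = deque(seed_urls)
    seen = set(seed_urls)
    while frontier and len(seen) <= max_pages:
        url = frontier.popleft()
        for link in fetch(url):                          # fetch() returns the links found on the page
            absolute, _ = urldefrag(urljoin(url, link))  # resolve relative links, drop #fragments
            if absolute not in seen:                     # skip URLs already queued or crawled
                seen.add(absolute)
                frontier.append(absolute)
    return seen

fake_web = {
    "http://example.com/":  ["/a", "/b"],
    "http://example.com/a": ["/", "/b"],
    "http://example.com/b": ["/a"],
}
print(sorted(crawl(["http://example.com/"], lambda url: fake_web.get(url, []))))

A real crawler would also respect robots.txt, throttle requests per server, and schedule revisits based on how often each page changes, exactly the trade-offs described above.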

2. Google’s Indexer

Googlebot gives the indexer the full text of the pages it finds. These pages are stored in Google’s index database. This index is sorted alphabetically by search term, with each index entry storing a list of documents in which the term appears and the location within the text where it occurs. This data structure allows rapid access to documents that contain user query terms.
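
The data structure being described is an inverted index. A minimal Python sketch (the two documents are made-up examples) looks like this:

import re
from collections import defaultdict

def build_index(documents):
    # Map each term to the documents (and word positions) where it appears.
    index = defaultdict(list)
    for doc_id, text in documents.items():
        for position, word in enumerate(re.findall(r"[a-z0-9]+", text.lower())):
            index[word].append((doc_id, position))
    return index

docs = {
    "dog-adopt.html": "Adopt a dog and save a life",
    "dog-food.html": "Homemade dog food recipes",
}
index = build_index(docs)
print(index["dog"])   # [('dog-adopt.html', 2), ('dog-food.html', 1)]

Storing the positions as well as the documents is what later lets the query processor reward pages where the query words appear near each other.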

3. Google’s Query Processor

The query processor has several parts, including the user interface (search box), the “engine” that evaluates queries and matches them to relevant documents, and the results formatter.

PageRank is Google’s system for ranking web pages. A page with a higher PageRank is deemed more important and is more likely to be listed above a page with a lower PageRank.

Google considers over a hundred factors in computing a PageRank and determining which documents are most relevant to a query, including the popularity of the page, the position and size of the search terms within the page, and the proximity of the search terms to one another on the page. A patent application discusses other factors that Google considers when ranking a page. Visit SEOmoz.org’s report for an interpretation of the concepts and the practical applications contained in Google’s patent application.
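
Google does not publish its ranking formulas, but the basic PageRank idea from the original paper can be sketched in a few lines of Python. This is only the textbook version (the tiny link graph and the damping factor are illustrative), not Google's actual implementation:

def pagerank(links, damping=0.85, iterations=50):
    # links maps each page to the list of pages it links to.
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = rank[page] / len(targets)       # a page splits its rank among its outlinks
                for target in targets:
                    new_rank[target] += damping * share
            else:
                for target in pages:                    # pages with no outlinks spread rank evenly
                    new_rank[target] += damping * rank[page] / len(pages)
        rank = new_rank
    return rank

links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))

The intuition is the one given above: a link is treated like a citation, and citations from important pages count for more.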

Google also applies machine-learning techniques to improve its performance automatically by learning relationships and associations within the stored data. For example, the spelling-correcting system uses such techniques to figure out likely alternative spellings. Google closely guards the formulas it uses to calculate relevance; they're tweaked to improve quality and performance, and to outwit the latest devious techniques used by spammers.

Indexing the full text of the web allows Google to go beyond simply matching single search terms. Google gives more priority to pages that have the search terms near each other and in the same order as the query. Google can also match multi-word phrases and sentences. Since Google indexes HTML code in addition to the text on the page, users can restrict searches on the basis of where query words appear, e.g., in the title, in the URL, in the body, and in links to the page, options offered by Google's Advanced Search Form and Using Search Operators (Advanced Operators).
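
For example, operators like intitle:, inurl:, and site: are the public face of this: a query such as intitle:"dog adoption" returns only pages with that phrase in the title, inurl:adopt only pages with "adopt" somewhere in the URL, and site:dog-adopt.net only pages from that one site. (The example queries here are my own; the operators themselves are standard Google search syntax.)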


For More Details Please Click Here - Get Smart SEO Tips