Wastholm.com

CustomizeGoogle is a Firefox extension that enhances Google search results by adding extra information (such as links to Yahoo, Ask.com, and MSN) and removing unwanted information (such as ads and spam).

The Assayer is the web's largest catalog of books whose authors have made them available for free. Users can also submit reviews. The site has been around since 2000, and is a particularly good place to find free books about math, science, and computers. If you're looking for old books that have fallen into the public domain, you're more likely to find what you want at Project Gutenberg.

How is your site doing? Run a quick SEO analysis.

Microsoft has had discussions with News Corp over a plan that would involve the media company being paid to “de-index” its news websites from Google, setting the scene for a search engine battle that could offer a ray of light to the newspaper industry.

Spinn3r is a web service for indexing the blogosphere. We provide raw access to every blog post being published, in real time. We provide the data, and you can focus on building your application, mashup, or search engine. We find the weblogs and RSS feeds, index their content, fetch the links, index their comments, etc.

Our purpose is rather simple. We want to make the internet as open as possible. Currently only a select few corporations have a complete and useful index of the web. Our goal is to change that fact by crawling the web and releasing as much information about its structure and content as possible. We plan to do this in a way that covers our costs (by selling our index) while releasing it for free for the benefit of all webmasters. Obviously, this goal has many potential legal, financial, ethical and technical problems. So while we can't promise specific results, we can promise to work hard, share our results, and help make the internet a better and more open space.

Sitemaps are an easy way for webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site. Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata. Using the Sitemap protocol does not guarantee that web pages are included in search engines, but provides hints for web crawlers to do a better job of crawling your site.
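To illustrate the protocol's simplest form, here is a minimal Sitemap listing a single URL with the three optional metadata fields described above (the domain and values are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <!-- the page's full URL -->
    <loc>http://www.example.com/</loc>
    <!-- when it was last updated -->
    <lastmod>2009-11-01</lastmod>
    <!-- how often it usually changes -->
    <changefreq>monthly</changefreq>
    <!-- importance relative to other URLs on the site (0.0–1.0) -->
    <priority>0.8</priority>
  </url>
</urlset>
```

All three metadata elements are optional under the protocol; only `loc` is required, and, as the text notes, they are hints to crawlers, not guarantees of inclusion.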

Rupert Murdoch has said he will try to block Google from using news content from his companies.

The billionaire told Sky News Australia he will explore ways to remove stories from Google's search indexes, including Google News.

Mr Murdoch's News Corp had previously said it would start charging online customers across all its websites.

He believes that search engines cannot legally use headlines and paragraphs of news stories as search results.

Google doesn't force Web sites to be included in its search listings. The people who run any site can remove it from Google's results with a few keystrokes.
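Those few keystrokes amount to a `robots.txt` file at the site root. A sketch of a rule that tells Google's crawler to stay out of the entire site:

```text
# robots.txt, served from the site root (e.g. /robots.txt)
# Blocks Google's crawler from the whole site.
User-agent: Googlebot
Disallow: /
```

Replacing `Googlebot` with `*` would block all well-behaved crawlers, not just Google's.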

...

It's not like this is some big secret. Google even has a page on its Web site explaining step by step how to do it. Yet neither AP nor News Corp. has taken this simple step to stop the marauding Google pirates from pillaging their cargo. Why? Because they know that their traffic would dry up overnight. They'd rather blame someone else for their failure to compete in a changing marketplace. They happily take all the customers Google sends them for free, and then accuse Google of theft. Classy.
