cheap uk web hosting free web space hosting

This web site is about cheap uk web hosting, free web space hosting, free hosting, uk web hosting, best web hosting reviews. Bargains and deals on cheap uk web hosting

Saturday, May 10, 2008

Free web page hosting review

Today's web page hosting review article

The Web Site Reality Check: Taking a Site Inventory

Fri, 28 Mar 2008 12:47:04 GMT
Is your site outdated or in need of a refresher? Here are just a few top signs that your website needs to be brought into the new century.

Moved from Web24 to Linode

Fri, 02 May 2008 16:28:28 +1000

(Note: This is a review of VPS hosting services provided by Web24 and Linode, and a migration of one of my sites from one to the other at the end of March 2008.)



Web24 — Virtuozzo VPS in Australia



Earlier this year I talked about writing a review of Web24.com.au, which I used to replace a VPS I got from GPLHost and terminated last December, as I was using too much bandwidth (and was too cheap to pay :) The VPS I got from Web24 was their Silver package, a Virtuozzo Linux VPS running Ubuntu, with 384MB guaranteed memory, 6+GB of privvmpages (burstable memory), 50GB/month data transfer, located in a Fujitsu data centre in Melbourne (see their profile on VPSAU). All this for under AUD$50/month, which is very affordable for my little sites.





Great Network, Great Uptime, Great Customer Service



Other than the Web24-wide connectivity issue back in February, their network has been rock solid. I am getting around 35ms from my home ADSL2+ to my VPS, and have used 30-40GB a month without issue.



My VPS there also had great uptime: it was up for the entire period, almost 120 days without a reboot, which certainly demonstrated the stability of Virtuozzo. Hardware-wise, Web24 was using a 2.4GHz Xeon with 4 physical cores and 8GB of physical memory. However, the cpulimit on my VPS was set to 160%, so I could only use around 960MHz worth of each CPU core (across all 4 cores at the same time), which is not bad at all. Web applications are rarely constrained by CPU power anyway.
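For reference, the arithmetic behind that 960MHz figure is simply the 160% cpulimit spread across the four cores; a trivial sketch using the numbers quoted above:

    # Rough arithmetic behind the cpulimit figure quoted above (illustrative only).
    core_clock_mhz = 2400     # 2.4GHz Xeon cores on the host node
    physical_cores = 4
    cpulimit_percent = 160    # Virtuozzo cpulimit applied to this VE

    # 160% spread evenly across 4 cores is 40% of each core,
    # i.e. roughly 960MHz worth of each 2.4GHz core.
    share_per_core = cpulimit_percent / physical_cores / 100
    print(f"~{core_clock_mhz * share_per_core:.0f}MHz of each core ({share_per_core:.0%} per core)")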



Customer service wise: top notch, all the way from pre-sales to support. I have sent in a few support tickets and all of them were answered in a timely manner. In my opinion GPLHost still has the better service, though, because they not only solve your problems and report back on their progress, but also follow up after the issues have been resolved.



Then There Was the Performance Issue



The main reason I am moving away from Web24 is performance, or the lack of it. As I said previously, the server CPUs have more than enough power, but my website there suffered badly degraded performance throughout February and March due to a large amount of I/O wait.



(Image: AWStats for the site at Web24.) First of all, what am I hosting there? A small to medium sized Drupal-based community site with around 5,000-7,000 visits per day, 800,000+ page views and 7.5 million hits per month. Lots of customisation and optimisation, and lots of fragment caching. Lighty on the front end with 5 PHP+XCache/FastCGI processes (which is more than enough), and MySQL 5 with a fat key buffer and query_cache_size tuned to optimise read requests. I am also trying to stay well below the memory limit, just to be on the safe side; in fact I usually use no more than 200MB of privvmpages. And I thought I was safe.
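As an aside, that privvmpages headroom is easy to confirm from inside the container via /proc/user_beancounters. A minimal sketch, assuming the stock UBC layout where each page is 4KiB (this is an illustration I am adding, not part of the original setup):

    # Report privvmpages usage against the UBC barrier/limit from inside a
    # Virtuozzo/OpenVZ container. Assumes the stock /proc/user_beancounters
    # layout; each page is 4KiB.
    PAGE_KIB = 4

    def read_ubc(resource="privvmpages", path="/proc/user_beancounters"):
        with open(path) as f:
            for line in f:
                fields = line.split()
                # Each resource row ends with: held maxheld barrier limit failcnt
                if resource in fields:
                    return [int(x) for x in fields[-5:]]
        raise KeyError(resource)

    held, maxheld, barrier, limit, failcnt = read_ubc()
    print(f"privvmpages held:    {held * PAGE_KIB // 1024} MiB")
    print(f"privvmpages maxheld: {maxheld * PAGE_KIB // 1024} MiB")
    print(f"burstable limit:     {limit * PAGE_KIB // 1024} MiB, failcnt={failcnt}")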



Not really.



Unfortunately I am not able to reproduce all the communications between Web24 support and myself (as I have a bad habit of deleting old emails), but most support issues I raised were performance related.



Swapping, Swapping and Constantly Swapping



On the evening of 20 December 2007, my VPS suddenly stopped responding. Yes, it still responded to ICMP pings. Yes, you could still connect to port 80, but the PHP/FastCGI backend would not load. SSHing in took forever, and once I was in I checked the load average: it had skyrocketed. I typed free -m and was shocked to discover that all the swap space had been exhausted, and free, buffers and cached were all in single digits. Note that on Virtuozzo 3 UBC this comes from the meminfo of the host server and has nothing to do with my own VPS: the host server was breathlessly swapping to cope with a memory shortage. I cannot remember how long it lasted, but it did resolve itself in the end, probably via the OOM killer and some process slaughtering. It wasn't pretty.
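Since /proc/meminfo inside a Virtuozzo 3 VE reflects the host node, as noted above, a crude cron check like the following could have raised the alarm earlier. This is a sketch I am adding for illustration, not something that was in place at the time:

    # Warn when swap in the (host-reflected) /proc/meminfo is nearly exhausted.
    def meminfo_kib():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                info[key] = int(value.split()[0])   # values are reported in kB
        return info

    m = meminfo_kib()
    swap_total, swap_free = m["SwapTotal"], m["SwapFree"]
    if swap_total and swap_free / swap_total < 0.10:
        print(f"WARNING: host swap nearly exhausted "
              f"({swap_free // 1024} MiB of {swap_total // 1024} MiB free)")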



So, in the middle of my panicking, I fired off a support request. It was escalated, and a few hours later someone responded saying it was a CPU issue rather than a memory issue, caused by someone else's VPS getting compromised. The swap space on the host server was then increased from 2GB to 12GB. I tried to argue that it WAS a memory/swapping issue, and that my processes could not get enough CPU time because everyone was busy paging in and out! Nope. Web24 denied it. The problem was with the CPU, they said, which was definitely not what vmstat told me.
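For the record, the vmstat evidence amounts to watching the si/so (swap in/out) and wa (I/O wait) columns. A rough sketch of that kind of check; the thresholds and the script are mine, not anything that was actually exchanged with support:

    # Sample vmstat and flag heavy swapping or I/O wait: high si/so and wa while
    # user CPU stays low, i.e. memory/disk pressure rather than a CPU problem.
    import subprocess

    out = subprocess.run(["vmstat", "5", "3"], capture_output=True, text=True).stdout
    lines = out.strip().splitlines()
    columns = lines[1].split()                     # second header line names the columns
    last = dict(zip(columns, lines[-1].split()))   # use the final interval sample

    si, so, wa, us = (int(last[c]) for c in ("si", "so", "wa", "us"))
    if si + so > 100:
        print(f"swapping hard: si={si} so={so} kB/s (memory pressure, not CPU)")
    if wa > 30 and us < 20:
        print(f"I/O wait dominating: wa={wa}% vs us={us}% (disk-bound, not CPU-bound)")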



The I/O Wait Issue Got Worse



Performance irregularity continued, and got worse around February and March. During peak hours (which for my site are 9am-11pm AEST), the 15-minute load average might go up to 4-5 for 5-10 minutes every now and then, and my site would completely stall. The 15-minute load average never drops back below 0.5 during peak hours, and simple commands like ls feel sluggish, a sure sign of an I/O wait problem. Swap usage on the host server has been ranging from 4GB to 6GB (out of 12GB total swap space), and sometimes some of my key processes get swapped out even though I am only using half of my guaranteed memory.
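Spotting that pattern does not need anything fancier than /proc/loadavg on a cron job. A minimal sketch, with thresholds that are mine rather than from the original incident:

    # Log when the 15-minute load average stays elevated during peak hours
    # (9am-11pm for this site). Thresholds are illustrative.
    import datetime

    with open("/proc/loadavg") as f:
        load1, load5, load15 = (float(x) for x in f.read().split()[:3])

    now = datetime.datetime.now()
    if 9 <= now.hour < 23 and load15 > 0.5:
        print(f"{now:%Y-%m-%d %H:%M} sustained load during peak: "
              f"1m={load1} 5m={load5} 15m={load15}")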



So I posted this question at WebHostingTalk (without mentioning who the provider was), and asked for opinions.



The question I would like to ask is: is it possible to pinpoint why the VPS is slow? My apps are obviously not CPU-bound, and as iostat does not really work in Virtuozzo I cannot tell whether they are IO-bound inside my VPS either. Is it possible to find out whether it is due to excessive paging on the physical server, so I can go to my provider and say, "hey, you are overselling and you should not pack that many VEs onto a physical server"?



And the responses I got:




… when a VPS node is using ANY swap at all that’s your first warning sign that the server is being pushed to its extremes.
seankoons at Zone.net



My take: It’s vastly oversold…
TheWiseOne at TekTonic



If 5GB of SWAP is being used, I’d say they are overselling…
devonblzx at Reseller3k




Well. I guess I got the message. Despite claims that "We do not overcommit on our VPS infrastructure", it looks like Web24 has simply jammed too many people onto that host node. Or maybe my hypothesis about UBC-based VPS hosting is true: by providing 6+GB of burstable memory, you are basically inviting everyone to use as much as they want, and the OOM killer won't kick in until swap space has been exhausted, which would probably be too late. In that case they might not be overselling if they calculated against the amount of guaranteed memory, but poor UBC settings can still make you look bad. Not that I am going to be hosted with another UBC-based VPS provider anyway.



Moved to Linode



As traffic to my website has also been slowly growing, I was facing two choices: move up to the next plan, or move elsewhere (again). Given the performance issues at Web24, I do not think moving up to the 512MB guaranteed plan would work any wonders. AUD$50/month is all I am willing to pay anyway, which means I won't be able to find any virtual server in Australia that fits my budget. With over 85% of traffic coming from Down Under it makes sense to host my site in Australia too, but when budget becomes a constraint, well, I guess I can live with a bit of latency.



As I still kept a Xen VPS at Linode after my review in January (yes, I actually became a customer), and it is in Fremont, CA, which is "close enough" to Australia for me, I decided to migrate from Web24 to Linode at the end of March.



So one evening I changed the DNS TTL to 15 minutes, put up a maintenance notice on my website, copied all the files across (less than 200MB), set everything up at Linode, made sure everything worked, and updated the DNS records to let them propagate. The next morning: traffic as usual, and everything "just works". You do feel the lag typing inside an SSH session, but you cannot really tell much difference in page serving. I guess the poor performance at Web24 sort of cancels out the benefit of low latency.
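For the curious, the copy step boils down to something like the sketch below, run from the new box. Hostnames, paths and the database name are placeholders, and the actual migration was done by hand rather than scripted:

    # Pull the docroot and database across from the old VPS. All names below are
    # hypothetical placeholders; this is an outline, not the actual procedure.
    import subprocess

    OLD = "user@old-vps.example.com"   # the Web24 VPS (placeholder)
    DOCROOT = "/var/www/site"          # placeholder path
    DB = "drupal"                      # placeholder database name

    def run(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    # 1. The docroot was under 200MB, so a single rsync pass covers it.
    run(f"rsync -az --delete {OLD}:{DOCROOT}/ {DOCROOT}/")

    # 2. Stream the database dump straight into the local MySQL.
    run(f"ssh {OLD} 'mysqldump --single-transaction {DB}' | mysql {DB}")

    # 3. The DNS TTL was lowered to 15 minutes beforehand and the records updated
    #    afterwards; that part was done in the DNS control panel, not scripted.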



My community site has now been running at Linode for a month, and the performance has been great. In April it handled 8,000+ visitors/day and used around 62GB of data transfer. The Linode Platform Manager shows 4% average CPU utilisation over the last month, and my Linode 360 uses little swap and always has 200+MB of free + buffers + cached memory. The 15-minute load average rarely goes above 0.1, and the 200GB/month data transfer allowance means it will probably last me a while.



Verdict



A few concluding points:




  • Don’t always believe what the support says. Gather your own evidence. Even a vmstat dump can point out roughly where the bottleneck is.



  • Xen >> Virtuozzo. Yes, I know Matt is advertising here, but personally I still much prefer Xen to Virtuozzo. At least you get a virtualised block device, where iostat can tell you how much I/O you are doing.




  • When you have high load on a Virtuozzo/OpenVZ VPS, adding more memory has no value by itself (despite many people having suggested this at WHT). If you are CPU-bound (not likely), check whether there is a cpulimit on your VPS and ask your provider to remove it. If you are IO-bound (very likely), then you can get more memory PLUS implement aggressive caching throughout the stack, so that hopefully there is less load on the database, if you are the trouble maker (see the caching sketch after this list).



    If you are not the one causing the problem, or maybe optimisation is not your cup of tea, then prepare to jump ship.




  • Oversold VPSes exist.
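On the aggressive caching point, the goal is simply to keep hot read queries away from the database. A minimal, generic TTL-cache sketch follows; it is written in Python for illustration, whereas the site itself relied on Drupal/PHP fragment caching:

    # Generic time-based cache so repeated read queries skip the database.
    import time
    import functools

    def ttl_cache(seconds=60):
        """Cache a function's results for `seconds`."""
        def decorator(fn):
            store = {}
            @functools.wraps(fn)
            def wrapper(*args):
                now = time.time()
                hit = store.get(args)
                if hit and now - hit[0] < seconds:
                    return hit[1]              # serve the cached fragment
                value = fn(*args)              # fall through to the database
                store[args] = (now, value)
                return value
            return wrapper
        return decorator

    @ttl_cache(seconds=300)
    def recent_posts(forum_id):
        # Placeholder for an expensive read query that would otherwise hit MySQL.
        return f"rendered fragment for forum {forum_id}"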






Introduction to Web Hosting Automation

Mon, 21 Mar 2005 00:00:00 EST
The objective of automation platforms is to simplify the hosting process to enable providers to handle more customers than they otherwise could, and to help them more efficiently meet the needs of customers.





Business 101: Pay Your GST Online

Tue, 06 May 2008 22:10:57 +0000
Most people know that they can file and pay their income tax online. It’s a very straightforward process and it sure beats having to go through the physical documents and filling out the individual lines one at a time by hand. Did you know that you don’t need to fill out the physical forms and ...

Visitor Center Museum

Thu, 17 Apr 2008 23:47:27 -0800

theWHIR.com posted a photo, "Visitor Center Museum": a broader view from inside the museum area. There are, as you can see, quite a few computers.





You can get an instant $50 discount when signing up for a Hostican shared hosting plan by using the Hostican coupon: BestHosting-12

Uptime Institute Says Power to Cost 300-2250% More Than Server Hardware; What Does This Mean?

Sun, 11 Mar 2007 22:31:00 -0400

I came across Uptime Institute founder Ken Brill's CIO Magazine article via 3tera VP Marketing Bert Armijo's blog.



Ken says while hardware prices are falling, total cost of data center ownership is headed through the roof. 5 years from now, the purchase price for a rack of servers will drop 27.5% from $138K today to just $103K. But while it only takes 15 kilowatts to power that rack right now, the energy requirement will rise to 22 - 170 kilowatts by 2012. It could cost as much as $2.3 million to power/cool $103K worth of gear throughout its 3-year lifespan.



(I'm not sure if this figure includes switches and routers and such. A recent Cisco/APC/Emerson study shows that servers/storage/cooling consume 76% of data center power, with 11% going to networking equipment, 3% lighting, and 10% power conversion losses. If Uptime's calculations didn't take the other 24% into account, Ken's $2.3M becomes over $3M!)
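Re-deriving those figures (the dollar and kilowatt inputs are from the article; I am assuming the low end of the "300-2250%" range corresponds to the 22kW end of the power estimate):

    # Sanity-check the quoted figures.
    hardware_cost = 103_000                  # projected price of a rack of servers in 2012
    power_kw_low, power_kw_high = 22, 170    # projected power draw per rack by 2012
    power_cost_high = 2_300_000              # 3-year power/cooling cost at the high end

    # Assumed: the low end of "300-2250%" scales with the 22kW end of the range.
    power_cost_low = power_cost_high * power_kw_low / power_kw_high
    print(f"low:  ${power_cost_low:,.0f} ({power_cost_low / hardware_cost:.0%} of hardware)")
    print(f"high: ${power_cost_high:,.0f} ({power_cost_high / hardware_cost:.0%} of hardware)")

    # If servers/storage/cooling are only 76% of data centre power, grossing the
    # $2.3M up to cover the other 24% gives the "over $3M" figure.
    print(f"grossed up: ${power_cost_high / 0.76:,.0f}")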



I've been thinking about Ken's stats and trying to understand what they mean. As a point of reference, I was looking at Dell's website, which advertises the 4U PowerEdge 6950 dual core, dual processor Opteron server for about $9K. Is Ken saying that:



(a) This particular machine will cost 27.5% less 5 years from now?



(b) 2012's late model machines will sell for 27.5% less than what's on the market today?



(c) The amount of server hardware that fills up 4U of space will be available for $6500 in 2012?



If we assume he means (c), and we accept Sun's claim that "server performance, power and space efficiencies are improving at up to 40% annually on average, and could double every 2 years", then 4U of space may be able to accommodate not one but 4 servers that each feature 4x more processing power and 4x greater energy efficiency.



In other words, $6,500 could buy you 16x more computing resources than that dual Opteron! If that's the case, you might even be able to afford $1M per rack per year in electricity. But only if you virtualize like crazy. No more leasing data center space per square foot or per rack. No more dedicated servers, either. The average customer won't need 4x more processing power in 5 years, which means you won't be able to justify turning on an entire server just for them.
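Working the compounding through (Sun's 40%-per-year figure is from the quote above; the rest is just my arithmetic):

    # Rough arithmetic behind the "16x more computing" claim.
    years = 5
    per_year = 1.40                      # Sun: up to 40% improvement per year
    print(f"one server: ~{per_year ** years:.1f}x better in {years} years")

    # The article's rounder version: each server ~4x better, and four of them now
    # fit in the same 4U of space, hence 4 * 4 = 16x the computing per rack unit.
    print(f"4 servers x 4x each = {4 * 4}x per 4U")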



You'd also have to replace hardware early and often. Sun recently announced a refresh service for swapping out your servers at least 3 times over 42 months. At first I thought that sounded wasteful, but if server power efficiency is improving at 40% per year, holding on to old gear might end up costing you more. Again, virtualization would be a must. You wouldn't want customer apps to become attached to machines that will be phased out before long.



Bert from 3tera says changes in data center economics will make it increasingly difficult for enterprise CIOs to justify operating their own facilities. But they won't outsource to traditional colo or dedicated server providers. Instead, he agrees with Cassatt CEO Bill Coleman that in the near-ish future, you'll be "paying for data center horsepower the same way you pay for electricity or gas". I think so too. How about you?



PS - On a somewhat related note, eWeek says Intel will release its "Clovertown" chips today. The quad core processors have a 50 watt thermal envelope, versus 80-120 watts on earlier models. That's a 38-60% drop.



PPS - Also, speaking of the Uptime Institute, check out this SearchDataCenter.com interview on how they've helped The Planet save $10K/month on electricity. The Planet, the article says, is looking to expand beyond Texas into the Midwest.







