VMware 6.7: A Blast From The Past

Or A Blast To Their Market?

I’ve used VMware on and off over the years, mainly in the pre-open-source days before VirtualBox, KVM, Xen, OpenVZ, etc., and I’ve dabbled in and helped maintain some VMware clusters since then.

Anyone familiar with VMware, or anyone who Googles it, will see lots of dire warnings about upgrading to the next version, since upgrades often break existing servers.  This is mainly not because of the Linux kernel; rather, VMware seems to have a policy of blacklisting and hardcoding which network adapters, ILOs, and CPUs are supported in each release.

Indeed, the majority of blogs you will find deal exclusively with warnings about what is not supported and how to work around various restrictions.

But 6.7 seems like a marked departure from the standard.  It has dropped support for the majority of CPUs that were still supported as recently as 6.5.

I’ve also found it to be fairly buggy, especially when getting vSphere working nicely on 6.7 ESXi hosts.

So this brings me to my next point: VMware has effectively shrunk its own market by making it so that many existing customers, and many people who might otherwise have used VMware, literally cannot use it (at least not the latest 6.7 version).  Since not a lot of hardware supports 6.7, the logical solution for many, even existing users, is to simply migrate their VMware VMs to something open source based on KVM, whether that is Proxmox, oVirt, OpenStack, etc.

Now, I do understand that VMware wants to preserve its market share, and it has likely worked out agreements with hardware manufacturers on what gets obsoleted, since many large corporate customers will simply buy brand-new hardware that is supported.

But to me it’s just not a green solution when the same “obsolete” hardware is more than capable of supporting large-scale computing infrastructure for a long time to come.  Computing power is so affordable and capable today that the problem for hardware manufacturers is that many organizations, even on old hardware, don’t need to upgrade (save, of course, for VMware’s mandatory hardware obsolescence).

Aside from all of this, VMware is a fairly good system, but after reviewing a lot of community feedback and talking to colleagues in the industry, I feel it is quickly becoming unattractive.  There’s a huge push to migrate to KVM-based virtualization, and I feel the latest VMware 6.7 will hasten this move.

RAID in 2018

Still Not Quite Obsolete

I’ve talked to a lot of professionals in the IT industry, and some, surprisingly, don’t even know what RAID is!  Others think it is unnecessary, while some still think RAID is a replacement for backups (something admins and hardware techs have been warning against for decades now).  First, I’ll give a quick introduction to what RAID is, what it isn’t, and its applications in the real world.

RAID stands for Redundant Array of Independent Disks.  I think the term is a little unnecessary in today’s world, but let’s break it down.

First of all, we are talking about an array of connected, separate hard disk drives.  These could be 2.5″, 3.5″, SAS, SATA, or SSD; as far as our implementation and OS are concerned, they are all essentially the same to the computer they are connected to.

There are five commonly used RAID levels, as follows:

  1. RAID 0 AKA striping (two drives required).  This takes two identical hard drives and combines their performance and capacity so they appear as a single drive.  Performance with RAID 0 is excellent, but the disadvantage is that the failure of any single disk results in data loss and the array going offline.  There is no recovery except from backups.  I never recommend RAID 0.
  2. RAID 1 AKA mirroring (two drives required).  It is called mirroring because both drives contain an identical copy of the data.  Read performance is enhanced because data can be read up to twice as fast by reading from the two drives simultaneously.  There is a write penalty, since the data must be written to both drives at once (however, this is usually not an issue for most servers, since the majority are read-intensive on average).
  3. RAID 5 (3+ drives required).  RAID 5 has in the distant past been one of the most common RAID levels, as it provides enhanced performance and some redundancy, but it is very prone to faults, failures, and slow rebuild times.  It uses parity that is spread across the drives, but calculating this parity often degrades performance unless a hardware RAID card is used.  It can withstand a single drive failure but NOT two.  Read performance is good, but the parity calculations slow things down.
  4. RAID 6 (4+ drives required).  Similar to RAID 5, but two drives’ worth of parity is used, so it can survive two drives failing and is more fault tolerant.  Rebuilds take even longer on RAID 6 than on RAID 5.  Read performance is good, but the parity calculations slow things down.
  5. RAID 10 AKA 1+0 (4 or more drives required).  It combines two RAID 1 mirrors, striped together as RAID 0.  It delivers excellent performance and is fault tolerant (one drive in each RAID 1 pair could die without any ill effect aside from some performance reduction).  Rebuild times are similar to RAID 1 and are much faster than RAID 5 or 6.
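To make the capacity and fault-tolerance tradeoffs above concrete, here is a small illustrative Python sketch.  The numbers simply follow the definitions of each level; the function name and the 4 TB drive size are my own choices for the example, not from any RAID tool:

```python
# Illustrative only: usable capacity and worst-case fault tolerance
# for the RAID levels described above, assuming n identical drives.

def raid_summary(level: str, n_drives: int, drive_tb: float):
    """Return (usable_TB, worst-case number of drives that may fail)."""
    if level == "0":             # striping: all capacity, no redundancy
        return n_drives * drive_tb, 0
    if level == "1":             # mirroring: one drive's capacity
        return drive_tb, n_drives - 1
    if level == "5":             # one drive's worth of parity
        return (n_drives - 1) * drive_tb, 1
    if level == "6":             # two drives' worth of parity
        return (n_drives - 2) * drive_tb, 2
    if level == "10":            # striped mirrors: half the raw capacity
        return (n_drives // 2) * drive_tb, 1   # worst case: one per mirror pair
    raise ValueError(f"unknown RAID level {level!r}")

for level, n in [("0", 2), ("1", 2), ("5", 3), ("6", 4), ("10", 4)]:
    usable, tolerance = raid_summary(level, n, 4.0)  # assume 4 TB drives
    print(f"RAID {level:>2} with {n} drives: "
          f"{usable:.0f} TB usable, survives {tolerance} failure(s)")
```

Note that RAID 10’s “worst case” is one failure: it can survive one drive per mirror pair, but two failures in the same pair kill the array.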

Rather than overcomplicating this issue, I will try to give a practical take on what RAID means in 2018.  Some have said RAID is obsolete, but they are usually referring to the nearly impossible resync/rebuild times on large multi-terabyte RAID 5/6 arrays.  I would agree there, as I’ve never liked RAID 5 or 6, and like it or not, they are very impractical to use.
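As an aside, the parity that RAID 5 and 6 depend on is, at its core, just a bytewise XOR across the data blocks.  This toy Python sketch (purely illustrative; real controllers rotate parity across drives and work in stripes) shows how a lost drive’s data can be rebuilt from the survivors:

```python
# Toy illustration of RAID 5-style parity: XOR the data blocks together
# to get a parity block, then reconstruct any single lost block by
# XORing the surviving blocks. Real arrays rotate parity and use stripes.

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

drive_a = b"hello world!"
drive_b = b"raid5 parity"
parity  = xor_blocks([drive_a, drive_b])

# Simulate losing drive_a: rebuild it from the surviving drive + parity.
rebuilt = xor_blocks([drive_b, parity])
print(rebuilt)  # b'hello world!'
```

This is also why rebuilds are so painful on big arrays: reconstructing one drive means reading every byte of every surviving drive.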

So what is the best way to go?

RAID 1: If you only have two drives, then I think RAID 1 is an excellent tradeoff.  It is quick and easy to resync/rebuild, a single drive can die without any data loss, and while both drives are active you get a read performance boost.

RAID 10: If you have four drives, you gain extra performance in a RAID 10 configuration, with the fault tolerance that a single drive in each RAID 1 pair could die without data loss.

The main disadvantage is that with RAID 1 and RAID 10 you are essentially losing 50% of your raw storage space, but since storage/drives are relatively cheap, I think it’s a worthy tradeoff.

There are some people who claim that “drives are more reliable today” and “you don’t need RAID anymore,” but I hardly find this true.  I’d actually argue that SSDs may be more unreliable or unpredictable than mechanical hard drives.  One thing we can all agree on is that the most likely component to fail in a server is a hard disk, and that’s not likely to change any time soon, however much we like to believe flash-based storage is more reliable.  I’d also ask anyone who runs on a single drive (even with backups): isn’t the performance benefit and redundancy worth running RAID?  I’m sure most datacenter techs and server admins would agree that it is much better to hot-swap a disk than to deal with downtime and restoring from backups, right?
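A quick back-of-envelope calculation supports this.  Assuming a hypothetical 3% annual failure rate (AFR) per drive, with failures independent (both figures are assumptions for illustration, not measured statistics), the chance of seeing at least one failure grows quickly with array size:

```python
# Back-of-envelope: probability of at least one drive failure per year
# in a small array, assuming a hypothetical 3% annual failure rate (AFR)
# per drive and independent failures. Illustrative numbers only.

AFR = 0.03

def p_any_failure(n_drives: int, afr: float = AFR) -> float:
    """Probability that at least one of n independent drives fails in a year."""
    return 1 - (1 - afr) ** n_drives

for n in (1, 2, 4, 8):
    print(f"{n} drive(s): {p_any_failure(n):.1%} chance of a failure per year")
```

Under these assumptions, an eight-drive box sees a failure roughly one year in five, which is exactly why hot-swapping a mirror member beats restoring from backup.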

Now for the warnings.  RAID “protection” is NOT a replacement for backups, even if no drive ever dies.  The term “protection” that some in the industry use is misleading: it is true in the sense that you are protected from data loss if a single drive fails (or possibly two at some RAID levels), but it does not account for natural disasters, theft, or accidental or willful deletion or destruction of data.

I’d say, as it stands in 2018 and beyond, that everyone should be using at least RAID 1, or RAID 10 if possible, in nearly every use case.  There are a few possible exceptions to this rule, but they are rare, and even then you should aim for as much redundancy as possible.

In conclusion: use RAID 1 if you can, and preferably RAID 10.  If you don’t yet use RAID, learn it and use it anyway.

Cheers!
A.Yasir

FINOM ICO Review and Regrets

This was a very expensive ICO at $2 per token.  Each purchase supposedly includes both a NOM and a FIN coin.  So far there haven’t been many real updates about when we’ll receive these tokens; all I know at this point is that one of them will be locked for a year.  The cost was very high, so I certainly hope this wasn’t a scam designed to collect a large amount of coins (as many ICOs are).  I hope they will go ahead with their projects and that this coin will return well.  Since the price was so high, investors now have far higher expectations than they do of more reasonably priced ICOs.  Can the team deliver, and do they even care to?  Time will tell, but if my experience with an overpayment is any indication, this team may not be honest or trustworthy, as is unfortunately the case with the majority out there.

I’m a user of Nanopool, which is how I found this ICO, and I already have regrets.  But let’s get into what FINOM claims to be.

They claim to have merged three projects into what is called FINOM: mining (Nanopool), Cryptonit (their own crypto exchange), and TabTrader (banking).

I know nothing about the other projects, but Nanopool works well.  What’s strange is that Nanopool appears to be China-based, since it has an ICP license.

The troubling question is who is really running these projects and this company.  It’s not clear: they claim to be from Switzerland, yet Nanopool is a Chinese website, which implies Chinese ownership.  Looking at who is behind FINOM, most of them appear to be extremely young.  The issue for me is why they registered in Switzerland when the entire team seems to have no connection there.  Was it to artificially build trust?  This is concerning because, as far as I can tell, they all really appear to be based in China and other parts of the world, so why try to hide this?

Screenshot-Nanopool - Mozilla Firefox

Why am I upset?

Well, first of all I feel silly, but I’m also frustrated with Ethereum.  When I was trying to send to this ICO, the Ethereum network was “congested,” as it often is.  I kept seeing in the console that the transaction was rejected, so I kept trying.  What I didn’t realize is that the Ethereum wallet doesn’t show pending transactions, and it may attempt to send more ETH than your account holds if you keep re-sending small amounts that are still pending.

I ended up sending them more ETH than I intended, which is of course worth far more now.  I opened a support ticket explaining what had happened, only to be ignored after several attempts:

FINOM-Scam

After a month of waiting for support, I e-mailed “hello@finom.io,” the address they encourage you to use on their website, but that didn’t work either.

Then I started getting spam from the owner, who appears to be overseas, so I thought I’d reply to him directly.

Finom-KirillSuslov-Support-Scam-Ignoring-Investors

Unfortunately, like most crypto projects and companies, they are content to collect your coins; they’ll spam you and create the impression that they are an honest, community-driven project, but usually nothing could be further from the truth.

Experiences like these not only frustrate investors like me; they make us less likely to invest in new projects going forward.

It’s hard to tell skeptics concerned about scams and fraud that it doesn’t happen, or to argue that “most ICOs and most crypto players are honest,” when in terms of IT, support, and communication I haven’t seen anything more arrogant or dishonest than experiences like these.  Our IT projects and clients would never tolerate this treatment, and it’s only a matter of time before investors vote with their wallets out of both genuine fear and frustration.

Ethereum’s Issues Stem From the Basics

Ethereum is certainly #2 in market capitalization, second only to Bitcoin, but that doesn’t mean it’s as easy to use.  In fact, I suspect experiences like my recent one are holding it back; Ethereum makes me nervous and reluctant to use it every day.  As someone who has used the clients/wallets for both, I find Ethereum’s cumbersome and at times impossible to use, to the point of preventing the user from doing any transactions at all.

Imagine if a simple e-Transfer or wire from your bank took over a week to initiate.  That’s far too long, and it defeats the supposed purpose of crypto transactions, which are famously real-time in theory but not in practice.

I spent nearly a week syncing 4 months of blocks!

I needed to do a transaction in Ethereum and opened my Ethereum client, which slowed my whole computer down and ultimately wouldn’t sync past a certain point.

I consider myself an above average user who is good at troubleshooting issues.

I updated to the latest Ethereum client, and that still didn’t fix it.

Some users suggest deleting the “chaindata” folder; that didn’t fix it either.

Eventually I decided to delete the whole “Ethereum Wallet” folder (never do this without safely backing up your keystore files first).  Be aware that this “Ethereum Wallet” folder is not where your keys/wallet data are stored.  On Linux the keys live in “~/.ethereum/keystore”, which is very misleading when you also have a “~/.config/Ethereum\ Wallet” (which holds no wallet data or keys).  I stress this because I came across many people who had sworn off the Ethereum coin and team over exactly this confusion, losing their keys and ultimately their investment and coins.
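Before deleting anything, it is worth scripting the keystore backup rather than copying by hand.  Here is a minimal Python sketch, assuming the default Linux paths mentioned above; the helper name and backup location are my own, so adjust them for your setup:

```python
# Minimal sketch: copy the Ethereum keystore somewhere safe before
# deleting any wallet or chain-data folders. The paths in the usage
# comment below are the Linux defaults discussed above.
import shutil
import time
from pathlib import Path

def backup_keystore(keystore: Path, backup_root: Path) -> Path:
    """Copy the keystore directory into a timestamped backup folder."""
    dest = backup_root / f"keystore-backup-{time.strftime('%Y%m%d-%H%M%S')}"
    shutil.copytree(keystore, dest)  # creates backup_root if missing
    return dest

# Example usage (would back up the default Linux keystore):
#   backup_keystore(Path.home() / ".ethereum" / "keystore",
#                   Path.home() / "ethereum-backups")
```

Verify the copied keystore files exist (and ideally test restoring one) before touching the originals.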

The solution was to delete “~/.config/Ethereum\ Wallet”, but the fun didn’t stop there.  It synced the missing blocks so slowly that it felt like I was mining the entire blockchain (you could literally count the blocks one by one as they were processed, and sometimes a single block took minutes).  I’ve synced the whole Bitcoin or Litecoin blockchain more quickly, and with little or no impact on my computer.

I decided to move Ethereum’s chaindata to an SSD.  That did speed things up, but not significantly: it still took about a day to catch up, and my computer still slowed down.

What I Learned About Ethereum

For all of its features, I think the team is out of touch with getting the basics right first, as evidenced by the “Parity” fiasco, in which, through no fault of their own, users had roughly 160M worth of Ether frozen and lost, presumably forever.  I have never seen this with another major coin.

Nor have I seen or experienced such confusion over the basics, or wondered why a client is so complex.  Why does it use a separate program, geth, to sync the data?  Why are there so many different choices: fast sync (which didn’t speed things up for me), the Mist client, and so many other confusing and unnecessary options?

I like how I can just download the Bitcoin or Litecoin client and it simply works; there’s no guessing or confusion.

When it comes down to it, if someone with my background has to troubleshoot just to make a transaction, or fears his coins could randomly be lost, it doesn’t bode well for Ethereum’s future.  I don’t mind leaving other wallets running, but Ethereum takes too much computing power, even on an SSD, to be practical.  I will still consider Ethereum a wise investment, with the risks I’ve highlighted above, but for any cryptocurrency to be truly accepted and successful it must be secure, fast, reliable, and easy to use.  Most cryptocurrencies still fail at this, if only because you must keep the whole blockchain to hold your money in your own possession, or else rely on dangerous, uninsured third-party exchanges and services that are often hacked.

As we can see below, this is not a sustainable practice for cryptocurrency going forward, and I will be posting more about how I think the future of crypto will differ significantly from what we see today.

Screenshot-Ethereum Wallet-19

Why Don’t I/We Use RHEL (Red Hat Enterprise Linux) Instead of CentOS?

This is a question one of our good clients asked me one day, and I have to admit I wasn’t prepared for it; it’s something we’ve never put active thought into, but was rather a matter of instinct.  While we do use various OSes for different platforms, including our own in-house Linux, cloud, hosting control panel, applications, and clients, plus BSD-based OSes such as FreeBSD, this is something we’ve never been asked.

RHEL (Red Hat Enterprise Linux) has been a clear leader since the early days at providing a standardized, mission-critical platform for business applications in the Linux/Unix environment.  It was actually my first Linux install back when I was in high school, and I’ve personally maintained Unix/Linux systems for over 16 years now.  In that time I have found RHEL’s strengths and weaknesses in terms of our business and clients.  The CentOS project, however, is a legal clone of RHEL; the only differences are the artwork and the CentOS name.  It functions identically to RHEL and is completely open source.

Since our team of experts does everything in-house, we don’t rely on vendor support: when something goes wrong with a server, we can solve the problem very quickly with our own team.  We don’t need to call a third party and ask how to fix the problem; in fact, it’s quicker for us to just do it ourselves, whereas I’ve seen many organizations rely heavily on this type of third-party support.  The goal with my ventures has always been that our teams should be self-sufficient, for both security and efficiency.

CentOS being open source is a huge advantage for us: we can customize and redistribute OS images and deploy them on servers without having to touch a button or connect a monitor or KVM (yes, I do realize RHEL can be installed headless via kickstart, but not in the way we deploy our custom OS images; possibly a topic for another blog).  The only con with CentOS is a small delay in updates, since it depends on its upstream source, RHEL, but this is a minor issue, and all major updates land in CentOS almost instantly.

To conclude, my hat is off to Linus Torvalds for creating the Linux kernel, to the RHEL team, and especially to the CentOS team, and I hope this explains why CentOS is currently the best fit for my company’s needs.