VMware 6.7: A Blast From The Past

Or A Blast To Their Market?

I’ve used VMware on and off over the years, mainly in the pre-open-source days before VirtualBox, KVM, Xen, OpenVZ etc…, and I’ve dabbled in and helped maintain some VMware clusters along the way.

Anyone familiar with VMware, or who Googles it, will see lots of dire warnings about upgrading to the next version, since upgrades often break existing servers.  This is mostly not down to the Linux kernel; rather, VMware seems to have a policy of blacklisting and hardcoding which network adapters, iLOs and CPUs are supported in each release.

Indeed the majority of blogs you will find deal exclusively with warnings of what is not supported and how to get around various restrictions.

But 6.7 seems like a marked departure from the norm.  It has dropped support for the majority of CPUs that were still supported as recently as 6.5.

I’ve also found it to be fairly buggy, especially when getting vSphere working nicely with 6.7 ESXi hosts.

So this brings me to the next point: VMware has effectively shrunk its own market by making it so that existing customers, and many people who might otherwise have used VMware, literally cannot use it (at least not the latest 6.7 version).  Since not a lot of hardware supports 6.7, the logical solution for many, even existing users, is simply to migrate their VMware VMs to something open source based on KVM, whether that’s Proxmox, oVirt, OpenStack etc…
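
For anyone weighing that migration, the disk conversion step is usually the easy part.  Here is a minimal sketch, assuming the standard qemu-img tool is installed; the file names are placeholders, not paths from any real datastore:

    # Convert a VMware VMDK disk image to qcow2 for use under KVM.
    # Assumes qemu-img is installed; both paths are hypothetical.
    import subprocess

    src = "guest-disk.vmdk"   # disk copied out of the VMware datastore
    dst = "guest-disk.qcow2"  # native image format for KVM/Proxmox/oVirt

    # -p prints progress; -f/-O set the input and output formats explicitly.
    subprocess.run(
        ["qemu-img", "convert", "-p", "-f", "vmdk", "-O", "qcow2", src, dst],
        check=True,
    )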

Now, I do understand VMware wants to preserve its market share, and it has likely worked out agreements with hardware manufacturers on what gets obsoleted, since a lot of large corporate customers will simply buy brand-new hardware that is supported.

But to me it’s just not a green solution when the same “obsolete” hardware is more than capable of supporting large-scale computing infrastructure for a long time to come.  Computing power is so affordable and plentiful today that the problem for hardware manufacturers is that many organizations, even those on old hardware, don’t need to upgrade (save, of course, for VMware’s mandatory hardware obsolescence).

Aside from all of this, VMware is a fairly good system, but after reviewing a lot of community feedback and talking to colleagues in the industry, I feel it is quickly becoming unattractive.  There’s a huge push to migrate to KVM-based virtualization, and I feel the latest VMware 6.7 will hasten this move.

Google Chrome now marking non-SSL sites as insecure

Another Google Unnecessity?

Previously, Google’s Chrome only marked non-SSL pages where you would input sensitive things like credit card details as insecure (and rightfully so), but what happened in July of 2018 is a different ball game.  Chrome now marks any site not using SSL (including mine) as insecure; even a blog that does nothing more than provide information…

Another strange thing is that Google claims there are “performance benefits” to switching to SSL.  I am not aware of any such benefits, as the SSL handshake and the encryption overhead itself can only decrease performance.  Now, I am not saying the hit is always significant or noticeable, but it is definitely silly to present a performance cost as something that increases performance.  It’s like saying “we’ve added way more stairs to your daily walk” but “this results in improved stair-climbing time”.
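
If you want to see the handshake cost for yourself, here is a rough sketch using only Python’s standard library.  The hostname is a placeholder, and the absolute numbers will vary with network latency:

    # Compare a plain TCP connect against a TCP connect plus TLS handshake.
    import socket
    import ssl
    import time

    HOST = "example.com"  # placeholder; use any host serving ports 80 and 443
    RUNS = 10

    def tcp_connect() -> float:
        start = time.perf_counter()
        with socket.create_connection((HOST, 80), timeout=5):
            pass
        return time.perf_counter() - start

    def tls_connect() -> float:
        ctx = ssl.create_default_context()
        start = time.perf_counter()
        with socket.create_connection((HOST, 443), timeout=5) as sock:
            # wrap_socket performs the full TLS handshake
            with ctx.wrap_socket(sock, server_hostname=HOST):
                pass
        return time.perf_counter() - start

    print("avg plain TCP :", sum(tcp_connect() for _ in range(RUNS)) / RUNS)
    print("avg TCP + TLS :", sum(tls_connect() for _ in range(RUNS)) / RUNS)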

The one thing I, and many others, take issue with is that Google wields enormous power and has been known to abuse it for its own benefit and the benefit of other large businesses, to the detriment of small business.  Google is perhaps the most powerful player on the internet overall, since it controls Search and YouTube, and it is an unregulated for-profit business that is essentially going to be cutting off access and traffic to non-SSL sites.

While it is good for everything to use some sort of encryption, it’s important to remember that not every site on the internet has the resources to set up its own SSL certificate.  I am not talking only financially (although it is not very expensive to do); on a technical level, I can imagine a lot of people and organizations will not have the ability to do so.  In addition, some hosting environments require other technical steps, such as a separate IP, which in turn requires a DNS update or migration (no simple feat for the non-technical).

I’ve always treated what I think of as “public domain” sites, where I am sharing the information publicly on purpose, as not needing SSL.  With this site and its articles, for example, I am not concerned with who is reading or who can see what is being read.

I think part of the motivation here may be an SEO benefit, or to weed out a lot of websites and owners who happen to be smaller and less sophisticated.  This means the average or smaller guy or company will be at a huge disadvantage on the web, with Chrome users scared off by warnings that viewing an article like this one without SSL is dangerous.

I think encouraging more sites to use SSL is a good idea, but I also think this amounts to penalizing smaller organizations and businesses by reducing their views, traffic and audience.

I’d also like to point out that typical session keys are small, from 128-bit to 256-bit, though to be fair even a 128-bit key is beyond the ability of any supercomputing facility to brute-force (see the back-of-the-envelope numbers below).  The bigger problem is that SSL and TLS have suffered protocol and implementation flaws in recent years, and if anything I think it is time to switch to something GPG-based if we are serious about security.  I believe the current SSL implementations give us a false sense of security.
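
Here is that arithmetic, for anyone curious; the guess rate is a deliberately generous assumption:

    # Years needed to brute-force a 128-bit key at an assumed, very generous
    # rate of 10^18 guesses per second.
    keyspace = 2 ** 128                    # number of possible 128-bit keys
    rate = 1e18                            # assumed guesses per second
    seconds_per_year = 60 * 60 * 24 * 365

    years = keyspace / rate / seconds_per_year
    print(f"{years:.2e} years")            # ~1.08e13 years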

There are a lot of cheap solutions for getting SSL set up, but it all depends on how and where you are hosted and on your level of expertise.
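
Once a certificate is in place, it is also easy to sanity-check it.  A small sketch using Python’s standard library, with the hostname as a placeholder for your own domain:

    # Fetch the certificate a site actually serves and print its issuer
    # and expiry date; the handshake also verifies the chain and hostname.
    import socket
    import ssl

    HOST = "example.com"  # placeholder for your own domain

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    print("issuer :", dict(pair[0] for pair in cert["issuer"]))
    print("expires:", cert["notAfter"])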

It’s also important to keep in mind that Google may give more weight to SSL sites in the search results than before, now that it is implementing this in Chrome (yes, I am aware that SSL sites have supposedly ranked higher for a while, but I suspect the algorithm will be tweaked shortly, if it hasn’t been already, to give much less weight to non-SSL sites).

Cheers!
A.Yasir


Tron’s Disappointed Fans

Fans That Disappoint

Fans and supporters took to social media to show their disappointment after the ‘secret announcement’  late last night. Justin Sun and the Tron Team apparently didn’t deliver the ‘marketing & shine’ that people expected.

They were expecting the ‘secret’ segment of the announcement to have more ‘pizazz’, and when they didn’t get it, they set off a Twitter storm to console each other or complain.  Some went as far as saying Tron needs to do better marketing and hire people who can present with more enthusiasm….

On the other side were die-hard supporters (who are capable of ignoring even the bad portions of what Tron does), saying, for once, something I agree with: it’s not about the marketing, it’s about the project.

This complaining about the lack of marketing is a serious issue for me.  In an age of ICO scams built on marketing glitter, you would think the project is what matters most, not the marketing.  If you only focus on the marketing, you’re going to miss out.  And many thousands missed the following points from the Tron secret announcement:

1. TVM, the Tron Virtual Machine: a lightweight, Turing-complete virtual machine devised for the development of Tron’s ecosystem.  It aims to provide millions of global developers with a custom-built system for blockchain that’s efficient, stable, secure and easy to optimize; essentially, developer-friendly.

2. 40 mainstream exchanges, including Binance, Bittrex, Bitfinex, Upbit, Huobi and OKEx, have completed the token migration from the now-worthless ERC20 tokens to Tron’s mainnet TRX, with no financial loss for users so far, although not all regular users/investors have made the swap.

3. Tron mainnet so far is running smoothly.

4. A total of 92,424,664,154.355837 ERC20 tokens have been burnt, accounting for 92.42% of the total issued.  Until the migration is completed, the remaining ERC20 tokens will remain valid for exchange at Binance and Gate.io.

5. The TRON network will serve as the underlying protocol of Project Atlas.  Hundreds of millions of BitTorrent (BT) users across the globe will become part of the TRON ecosystem.  BT will be the largest application on the TRON network, which will allow TRON to surpass Ethereum in daily transactions and become the most influential public blockchain in the world.

As some might know from my previous post about Tron’s lack of care during this migration, I’m not exactly happy with Tron.  The migration itself really shook my confidence in their ability to properly execute what they intend with investors in mind.  But this ‘marketing and glitter’ issue that many of the supporters and fans are complaining about makes me uneasy that I’m investing in something people depend on so heavily to be marketing-savvy.

Dogecoin is a better investment than many of these ICOs and new coins, yet it lacks excitement and flair.  Cryptocurrency users and investors are inviting an ecosystem that rewards exciting-sounding scams and widgets that are guaranteed to fail.  This may work in the short term, but in the long term it will be the difference between life and death for currencies and projects, and between profit and loss for the investor.

Tron is actually a really good project; it has made some great strides and is taking steps in the right direction.  Do they need marketing hype all the time, especially when investors like me just want straight talk about what’s happening and what they are doing?

There’s a serious lack of common sense in cryptocurrency, and among its investors too.

This is money. This isn’t a high school popularity contest, where you need sparkles every time they hold a dance. And this should mean a lot coming from me, considering I am incredibly hard on Tron for their fiasco of a migration.

These are not real issues.  Their job is to make the project a success and take Tron to new levels, not to sit there with a marketing team sprinkling sparkle on every meeting.

We have real issues here with Tron, and none of them have to do with marketing.  People really need to look at those issues instead of being upset about the marketing or Justin Sun’s inability to woo them.

What do you think, am I wrong? Should there be more marketing sparkle for cryptocurrency projects?

Cheers,
A.Yasir


Bitcoin Anonymity: At What Cost?

Wasabi Wallet

We’ve already heard of “tumblers”, which make it very difficult to trace the true sender or receiver of a Bitcoin transaction.  Now we have the “Wasabi” wallet project, which takes a somewhat different approach: it uses the Tor network to anonymize you on the Bitcoin network.  However, I think this is a risky move, because nodes on the Tor network (especially exit nodes) have been set up by malicious groups, including government agencies, for surveillance and other uses.

The problem with depending on the Tor network and a third-party client is: what if someone injects malicious code, as in the Bitcoin Gold client scam?  Even if that’s not the case, what if malicious Tor node operators band together, target Bitcoin users, and successfully trick the Wasabi client into thinking you’ve received money you don’t have?  It would certainly take effort and be tricky, but with enough time, money and resources it is a real possibility based on the reward value alone.

So, while the idea is well-intentioned, I think trying to solve this any other way is risky; it should be the Bitcoin code base itself that is modified to support these features.

Another alternative is to use your own personal proxy or server to hide your real IP, as this is already a supported feature of the Bitcoin client itself.
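
For reference, a minimal sketch of that configuration, assuming the Bitcoin Core client and its bitcoin.conf proxy option; the address is a placeholder for a SOCKS5 proxy or server you control:

    # bitcoin.conf - route all connections through your own SOCKS5 proxy
    # (the address below is a placeholder for your own proxy/server)
    proxy=192.0.2.10:1080
    # optionally stop accepting inbound connections
    listen=0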

What do you think?

Cheers!
A.Yasir

RAID in 2018

Still Not Quite Obsolete

I’ve talked to a lot of professionals in the IT industry, and some surprisingly don’t even know what RAID is!  Others think it is unnecessary, while some still think RAID is a replacement for backups (something admins and hardware techs have been harping on about for decades now).  First, I’ll give a quick introduction to what RAID is, what it isn’t, and its applications in the real world.

RAID stands for Redundant Array of Independent Disks.  I think the term is a little dated in today’s world, but let’s break it down.

First of all, we are talking about an array of connected but separate hard drives.  These could be 2.5″, 3.5″, SAS, SATA or SSD; as far as our implementation and the OS are concerned, they are all essentially the same to the computer they are connected to.

The five most common RAID levels are as follows:

  1. RAID 0, AKA striping (two drives required).  This takes two identical hard drives and combines their performance and capacity into what appears to be a single drive.  Performance with RAID 0 is excellent, but the disadvantage is that the failure of any single disk results in data loss and the array going offline.  There is no recovery except from backups.  I never recommend RAID 0.
  2. RAID 1, AKA mirroring (two drives required).  It is called mirroring because both drives contain an identical copy of the data.  Read performance is enhanced because data can be read from the two separate drives simultaneously, roughly doubling throughput.  There is a performance penalty on writes, since the data must be written to both drives at once (however, this is usually not an issue, as most servers are read-intensive on average).
  3. RAID 5 (3+ drives required).  RAID 5 has in the past been one of the most common RAID levels, as it provides enhanced performance and some redundancy, but it is very prone to faults, failures and slow rebuild times.  Parity data is spread across the drives (see the toy parity sketch after this list), and the parity calculations often degrade performance unless a hardware RAID card is used.  It can withstand a single drive failure but NOT two.  Read performance is good, but the parity calculations slow down writes.
  4. RAID 6 (4+ drives required).  Similar to RAID 5, but two drives’ worth of parity are used, so it can survive two drives failing and is more fault-tolerant.  Rebuilds take even longer on RAID 6 than on RAID 5.  Read performance is good, but the double parity calculations slow down writes further.
  5. RAID 10, AKA 1+0 (requires 4 or more drives).  It is two RAID 1 mirrors striped together as a RAID 0.  It delivers excellent performance and is fault-tolerant (one drive from each RAID 1 pair could die with no ill effect aside from some performance reduction).  Rebuild times are similar to RAID 1 and are much faster than RAID 5 or 6.
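
Here is a toy sketch of the parity idea mentioned above: the parity block is simply the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.  The “drives” are just byte strings for illustration:

    # Toy RAID 5-style parity demo: parity = XOR of the data blocks, so any
    # one lost block can be rebuilt from the remaining blocks plus parity.
    drives = [b"block-A!", b"block-B!", b"block-C!"]  # equal-sized "drives"

    parity = bytes(a ^ b ^ c for a, b, c in zip(*drives))

    # Simulate losing drive 1 and rebuilding it from the survivors + parity.
    rebuilt = bytes(x ^ y ^ p for x, y, p in zip(drives[0], drives[2], parity))
    assert rebuilt == drives[1]
    print("rebuilt:", rebuilt.decode())  # -> block-B!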

Rather than over-complicate this issue, I will try to give a practical take on what RAID means in 2018.  Some have said RAID is obsolete, but usually they are referring to the nearly impossible resync or rebuild times on large multi-terabyte RAID 5/6 arrays.  I would agree there; I’ve never liked RAID 5 or 6, and whether you like them or not they are very impractical to use.

So what is the best way to go?

RAID 1: If you only have 2 drives, then I think RAID 1 is an excellent trade-off.  It is quick and easy to resync/rebuild, a single drive can die without any data loss, and while both drives are active you get a performance boost on reads.

RAID 10: If you have 4 drives, you gain extra performance in a RAID 10 configuration, with fault tolerance such that a single drive in each RAID 1 pair could die without data loss.

The main disadvantage is that with RAID 1 and RAID 10 you are essentially giving up 50% of your raw storage space, but since storage/drives are relatively cheap, I think it’s a worthy trade-off.
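
To make that trade-off concrete, here is a quick sketch of usable capacity by RAID level for a set of identical drives (a simplification that ignores filesystem and controller overhead):

    # Usable capacity for n identical drives of a given size, by RAID level.
    def usable_tb(level: int, n: int, size_tb: float) -> float:
        if level == 0:
            return n * size_tb        # striped, no redundancy
        if level == 1:
            return size_tb            # n-way mirror (commonly n = 2)
        if level == 5:
            return (n - 1) * size_tb  # one drive's worth of parity
        if level == 6:
            return (n - 2) * size_tb  # two drives' worth of parity
        if level == 10:
            return n * size_tb / 2    # mirrored pairs, striped
        raise ValueError(f"unsupported level: {level}")

    for lvl in (0, 1, 5, 6, 10):
        print(f"RAID {lvl:>2}, 4 x 4 TB -> {usable_tb(lvl, 4, 4):g} TB usable")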

There are some people who claim that “drives are more reliable today” and “you don’t need RAID anymore”, but I hardly find this true.  I’d actually argue that SSDs may be more unreliable, or at least less predictable, than mechanical hard drives.  One thing we can all agree on is that the most likely component to fail in a server is a hard disk, and that’s not likely to change any time soon, however much we like to believe flash-based storage is more reliable.  I’d also ask anyone who runs on a single drive (even with backups): isn’t the performance benefit and redundancy worth running RAID?  I’m sure most datacenter techs and server admins would agree that it is much better to hot-swap a disk than to deal with downtime and restoring from backups, right?

Now for the warnings.  RAID “protection” is NOT a replacement for backups, even if nothing ever dies.  The reason lies in that misleading term, RAID “protection”, that some in the industry use.  It is true in the sense that you are protected from data loss if a single drive fails (or possibly two at some RAID levels).  However, this doesn’t take into account natural disasters, theft, or the accidental or willful deletion or destruction of data.

I’d say, as it stands in 2018 and beyond, that everyone should be using at least RAID 1 or RAID 10 where possible, in nearly every use case.  There are a few possible exceptions to this rule, but they are rare, and even then you should aim for as much redundancy as possible.

In conclusion: if you can, use RAID 1, and preferably RAID 10.  If you aren’t using RAID, learn it and use it anyway.

Cheers!
A.Yasir