VMware 6.7: A Blast From The Past

Or A Blast To Their Market?

I’ve used VMware on and off over the years, mainly in the days before open-source alternatives like VirtualBox, KVM, Xen, OpenVZ, etc., and I’ve dabbled in and helped maintain some VMware clusters along the way.

Anyone familiar with VMware, or who Googles it, will see lots of dire warnings about upgrading to the next version, since upgrades often break existing servers.  This is mainly not because of the Linux kernel; rather, VMware seems to have a policy of blacklisting and hardcoding which network adapters, iLOs and CPUs are supported in each release.

Indeed, the majority of blogs you will find deal exclusively with warnings about what is not supported and how to get around various restrictions.

But 6.7 seems like a marked departure from the standard: it has dropped support for the majority of CPUs that were still supported as recently as 6.5.

I’ve also found it to be fairly buggy, especially when getting vSphere working nicely with 6.7 ESXi hosts.

So this brings me to the next point: VMware has effectively shrunk its own market by making it so that existing customers, and many people who might otherwise have used VMware, literally cannot use it (at least not the latest 6.7 version).  Since not a lot of hardware supports 6.7, the logical solution for many, even existing users, is to simply migrate their VMware VMs to something open source based on KVM, whether that is Proxmox, oVirt, OpenStack, etc.

Now, I do understand VMware wants to preserve its market share, and it has likely worked out agreements with hardware manufacturers on what gets obsoleted, since a lot of large corporate customers will simply buy brand-new hardware that is supported.

But to me it’s just not a green solution when the same “obsolete” hardware is more than capable of supporting large-scale computing infrastructure for a long time to come.  Computing power is so affordable and plentiful today that the problem for hardware manufacturers is that many organizations, even with old hardware, don’t need to upgrade (save, of course, for VMware’s mandatory hardware obsolescence).

Aside from all of this, VMware is a fairly good system, but I feel it is quickly becoming unattractive after reviewing a lot of community feedback and talking to colleagues in the industry.  There’s a huge push to migrate to KVM-based virtualization, and I feel the latest VMware 6.7 will hasten this move.

Google Chrome now marking non-SSL sites as insecure

Another Google Unnecessity?

Previously, Google’s Chrome only marked sensitive pages where you would input things like credit card details as insecure (and rightfully so), but what happened in July of 2018 is a different ball game.  Chrome now marks any site not using SSL (including mine) as insecure: a blog site that does nothing more than provide information…

Another strange thing is that Google claims there are “performance benefits” to switching to SSL.  I am not aware of any such benefits, as the SSL handshake and encryption overhead only decrease performance.  Now, I am not saying the impact is always significant and noticeable, but it is definitely silly to claim a feature with a performance cost as something that increases performance.  It’s like saying “we’ve added way more stairs to your daily walk” but “this results in improved stair climbing time”.

The one thing I and many others take issue with is that Google wields enormous power and has been known to abuse it for its own benefit and that of other large businesses, to the detriment of small business.  Google is perhaps the most powerful entity on the internet, since it controls search and YouTube, and it is an unregulated for-profit business that is essentially going to be cutting off access and traffic to non-SSL sites.

While it is good for everything to use some sort of encryption, it’s important to remember that not every site on the internet has the resources to set up its own SSL certificate.  I am not talking only financially (although it is not very expensive), but on a technical level I can imagine a lot of people and organizations will not have the ability to do so.  In addition, some hosting environments require other technical steps, such as a separate IP, which requires a DNS update or migration (no simple feat for the non-technical).
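For those who can manage the technical side, the mechanics themselves are short.  A sketch with OpenSSL (the domain and file paths here are placeholders); a free CA such as Let’s Encrypt can then sign the request, or its certbot tool can automate the whole process:

```shell
# Generate a 2048-bit RSA private key and a certificate signing request
# for a hypothetical domain - substitute your own domain and paths.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout /tmp/example.com.key \
    -out /tmp/example.com.csr \
    -subj "/CN=example.com"

# Sanity-check the request before sending it to a certificate authority:
openssl req -in /tmp/example.com.csr -noout -subject -verify
```

The remaining work (installing the signed certificate and updating the web server config) is where the non-technical tend to get stuck, which is the point above.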

I’ve always treated what I think of as “public domain” sites, where I am publicly sharing the information on purpose, as not needing SSL.  With this site and its articles, for example, I am not concerned with who is reading or who can see what is being read.

I think part of the motivation here may be an SEO benefit, or to weed out a lot of websites and owners who happen to be smaller and less sophisticated.  This means the average smaller person or company will be at a huge disadvantage on the web in Google Chrome, where users are scared off by warnings that viewing an article like this one without SSL is dangerous.

I think encouraging more sites to use SSL is a good idea but I also think it is a form of penalizing and reducing the views, traffic and audience of smaller organizations and businesses.

I’d also like to point out that typical key sizes are small, from 128-bit to 256-bit, and I believe this is well within the ability of large supercomputing facilities to crack.  SSL and TLS have suffered from security flaws in recent years, and if anything I think it is time to switch to something GPG-based if we are serious about security.  I believe the current SSL implementations give us a false sense of security.

There are a lot of cheap solutions for this, but it all depends on how and where you are hosted and your level of expertise.

It’s also important to keep in mind that Google may give even more weight to SSL sites in the search results if they are implementing this in Chrome (yes, I am aware that SSL sites have supposedly ranked higher for a while, but I think the algorithm will shortly be tweaked, if it hasn’t been already, to give much less weight to non-SSL sites).



Bitcoin Anonymity at what cost?

Wasabi Wallet

We’ve already heard of “tumblers”, which make it very difficult to trace the true sender or receiver of a Bitcoin transaction.  Now we have the “Wasabi” wallet project, which does something a bit different: it uses the Tor network to anonymize you on the Bitcoin network.  However, I think this is a risky move, because malicious actors on the Tor network (especially exit nodes) have been set up by malicious groups, including government agencies, for surveillance and other uses.

The problem with depending on the Tor network and a third-party client is: what if someone injects malicious code, as in the Bitcoin Gold client scam?  Even if that doesn’t happen, what if some malicious Tor node operators get together, target Bitcoin users, and successfully trick the Wasabi client into thinking you’ve received money you don’t have?  It would certainly be tricky and take effort, but with enough time, money and resources it is a real possibility, based on the reward value alone.

So, while the idea is well-intentioned, I think trying to solve it any other way is risky, and it should be the Bitcoin code base itself that is modified to support these features.

Another alternative is to use your own personal proxy or server to hide your real IP, as this is already a supported feature of the Bitcoin client itself.
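For example, Bitcoin Core can already route its traffic through a SOCKS5 proxy via bitcoin.conf; the address below assumes a local Tor daemon or personal proxy on the conventional port and is only an illustration:

```conf
# bitcoin.conf - send all peer connections through a SOCKS5 proxy.
# 127.0.0.1:9050 is the conventional local Tor port; point this at your
# own personal proxy instead if that is what you run.
proxy=127.0.0.1:9050
# Don't accept inbound connections, which would reveal your real IP.
listen=0
```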

What do you think?


Meltdown and Spectre Analysis and Current Status

There seems to be a lot of complacent or feel-good news suggesting that Meltdown and Spectre will solve themselves, or that users need not worry or take any care, but this couldn’t be further from the truth.  In reality, CPU makers saying “there are no known cases of exploits” does little to allay the fears of those in the know, because Spectre and Meltdown leave no trace or evidence that you’ve been hacked.  (Although it can be argued that there may be some signs of unauthorized access, if that was how access was gained.)

However, the nature of Spectre and Meltdown allow for normal authorized users, programs and even scripts on websites to exploit you.  This is why it is so scary as there’s really no way to be certain you haven’t been breached.

It’s an issue for everyone, because these exploits could impact anything from your bank to transportation/transit, airplanes, nuclear power plants, and basically anything else that relies on computing security, since Meltdown and Spectre are a complete breakdown of those barriers.  I won’t go into more of the basic details, but I did make a quick “take on the issue here”.

The good news

Patches were quickly released for a lot of Linux, Windows and Mac devices.  However, this doesn’t mean that users installed the patches, or that all users have the ability or access to do so.  Take, for example, physically remote computers and devices, perhaps some running headless, that may not be easily accessible, or that for some reason have patches disabled (this is more common than you’d think in production or mission-critical environments).

Then what about old, unsupported versions of operating systems, or that old security system, phone, TV box or even ATM whose manufacturer may not be around anymore, or is simply not offering support?

It’s the same issue as with many common worms and viruses: patches and fixes may be issued, but millions or more are often still affected long after, for various reasons.

The bad news

Even if we assume that Google discovered these flaws first, and that they weren’t mandated to be put there via ARM, AMD and Intel, what about insiders who knew about this back in June or even earlier?  From that point on, any number of individuals and groups could have compromised or damaged sensitive data and computer systems.  There’s still time, since a lot of devices and people are not patched yet.

And to make things worse, the only true way to solve this issue is with a CPU microcode update, which is not simple to deploy, especially on embedded devices, and any mistake can lead to a bricked device.

These OS patches are just that, “patch work”: a hack or workaround to mitigate the issue.

Then there’s the question of the three known variants or vectors of attack: what if there are others not yet discovered?  You can bet well-equipped and well-funded organizations and hacking groups are working on this as we speak, and they certainly won’t be disclosing it.  Until all devices have microcode updates, there’s no way to be certain we are safe from unknown vectors related to Spectre and Meltdown.

What can you do?

Simply look out for the latest updates for your devices/phones/computers and install them, but don’t falsely assume a new update means you are protected unless you’ve read that the update specifically fixes the Spectre and Meltdown issues.
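On Linux, one way to verify rather than assume is to ask the kernel itself; kernels from roughly 4.15 onward expose per-vulnerability status files (a sketch, and older kernels simply won’t have the directory):

```shell
# Report the kernel's view of each known CPU vulnerability and its mitigation.
vulndir=/sys/devices/system/cpu/vulnerabilities
if [ -d "$vulndir" ]; then
    for f in "$vulndir"/*; do
        printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
    done
else
    echo "No vulnerability reporting in this kernel - it likely predates the fixes."
fi
```

On a patched kernel this prints lines such as “meltdown: Mitigation: PTI”; a line reading “Vulnerable” means an update did not actually protect you.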

My Take On Meltdown and Spectre Computer Security Flaws

Spectre and Meltdown allow a non-privileged user (non-root/non-Admin) to access memory they aren’t supposed to, essentially dissolving the majority of computing security and privacy barriers.  This could be a guest user collecting sensitive information and passwords for an entire database, group of users, network, etc.

If you are using any computing device, whether it has an ARM-based CPU, an Intel CPU (Intel being the worst offender at this point) or an AMD CPU, this issue affects you and billions of other devices and users around the world.  Whether you are on Linux, Unix, Windows or Mac, this applies to you.  It is really an unmitigated scandal and disaster for privacy, security and even safety, with long-lasting and wide-ranging ramifications that will continue to play out for years.

I’ve commented in the past about security, IoT and how many devices are now unsupported or can’t be updated, leading to huge security issues.  We are unfortunately now there, and in a sense have been since 1995.

These issues were first reported by Google Project Zero and are known as the Meltdown and Spectre vulnerabilities, which affect virtually all microprocessors made since 1995 (the modern computing era).

To make it worse, there are three known “variants” or attack vectors (I suspect there may be more that are undisclosed or not yet known to the public).  Variants 1 and 2 are very similar and are known as Spectre, while variant 3 is known as Meltdown.

  • Variant 1: bounds check bypass (CVE-2017-5753)
  • Variant 2: branch target injection (CVE-2017-5715)
  • Variant 3: rogue data cache load (CVE-2017-5754)

The attack is possible due to “speculative execution”, where CPUs essentially try to predict future work and will sometimes do work that turns out to be unneeded, because the performance cost of doing so is lower than waiting to execute the instructions later.  In other words, the computer sometimes performs unneeded, unused work in order to increase performance.  Where things have gone bad is that, through this feature, it’s possible for a normal user or process to gain unrestricted access to memory it shouldn’t have access to.

What is Spectre?

The primary variants (1 and 2) that make up Spectre rely on exploiting the CPU’s speculative execution so that memory contents are leaked into a location under the attacker’s control.  This allows a normal user to read memory from basically all processes, allowing keys, passwords and confidential data to be intercepted.  AMD claims that variant 2 does not impact its CPUs.



What is Meltdown?

Meltdown is the third, and the most serious and nasty, variant.  It still relies on the speculative execution flaw, but actually allows the attacker to read arbitrary memory (so, basically anywhere at will).  The key feature of Meltdown is that it is the easiest attack to perform, and it has already been demonstrated on the Intel platform.

The only good news is that this Meltdown attack apparently affects only Intel and not AMD.



Red Hat has also done an excellent writeup about the issue here:


How To Protect Yourself

First and foremost, you should update your devices as soon as patches become available.  On Linux, enabling KPTI (kernel page-table isolation) can protect you.  However, users of some major Linux distributions are still waiting for a patch.

If you are vulnerable and performing critical operations, it’s time to make tough choices, including possibly turning off your machines or, where possible, denying all non-admin users access to a server and its services.

Ensuring rotation of keys and passwords can also mitigate your risk, even if passwords have already been compromised.

It comes down to good security practices all around, such as segregating services onto different physical machines and restricting physical and virtual user access.

If possible, remove all non-essential or untrusted applications from your device/computer/server.

Dedicated Servers Will Become More Popular

There has been a huge trend to put everything into the Cloud, one that I have reservations about despite owning companies that offer our own private Cloud.

Fortunately we haven’t been impacted by Spectre and Meltdown and are not vulnerable, but it does raise the kinds of questions from our clients that we’ve mentioned before.

I’ve always advocated for physical segregation, which means that, if possible, you should have your own physical dedicated server that is encrypted and running a minimum set of services with as few users as possible.  By putting your company database, e-mail, VPN, websites and file server on physically different servers, you significantly reduce your risk in a scenario like this.

Serious Questions and Concerns Raised

I would ask: is it really possible that such a wide-ranging exploit was completely unknown for this long, until a team from Google discovered it?  Considering the budgets of major intelligence agencies around the world, which are constantly looking for exploits of their own, it is conceivable that this vulnerability was exploited by specific groups for far longer than it was publicly known.

Another is Intel’s response, which has drawn accusations of unfairly lumping in AMD when, as of now, Intel is far more vulnerable.

Since these chip makers are all US-based, is it possible they were mandated by law to implement speculative execution in such a similar way that this vulnerability would be possible?  Considering recent revelations, I don’t think it is inconceivable.

Are there more than three variants?  And even if we assume no one else knew about variants 1–3, is it not possible that a well-armed team could find new ways to exploit them?

Long-term Value for Intel, AMD and ARM

At the time of writing, Intel’s stock was down about 3%, but this could get worse for any of these companies if their vulnerabilities keep increasing and/or one of them is hit with a larger exploit.


It’s hard to give an honest conclusion, as we’re just getting started and this is all we know about variants 1 and 2 (Spectre) and Meltdown.  So far it looks like we were lucky to choose AMD.  The key issue that will come out of this is how many devices and users will remain vulnerable because they are unable to patch, have a device that cannot easily be patched, or no longer have vendor support.  This could increase the number of zombies and data security breaches several fold.

This is also a good time and a wakeup call for all companies to do a security audit and if they don’t have dedicated security staff, to bring in some good IT and security auditors to assess and mitigate these risks before they become costly losses.

Why Don’t I/We Use RHEL (Red Hat Enterprise Linux) Instead of CentOS?

This is a question one of our good clients asked me one day, and I have to admit I wasn’t prepared for it; it’s something we’ve never put active thought into, but was rather a matter of instinct.  While we do use various OSes for different platforms, including our own in-house Linux, Cloud, hosting control panel and applications, and BSD-based OSes such as FreeBSD for some clients, this is something we’ve never been asked.

RHEL (Red Hat Enterprise Linux) has been a clear leader since the early days in providing a standardized, mission-critical platform for business applications in the Linux/Unix environment.  It was actually my first Linux install back when I was in high school, and I’ve personally maintained Unix/Linux systems for over 16 years now.  In that time I have found the strengths and weaknesses of RHEL in terms of our business and clients.  The CentOS project, however, is a legal clone of RHEL: the only differences are the artwork and the CentOS name.  It functions identically to RHEL and is completely open source.

Since our team of experts does everything in-house, we don’t really rely on vendor support; when something is going on with a server, we can solve the problem very quickly with our own team.  We don’t need to call a third party and ask how to fix the problem; in fact, it’s quicker for us to just do it ourselves, whereas I’ve learned that many organizations rely heavily on this type of third-party support.  The goal with my ventures has always been that our teams should be self-sufficient, for both security and efficiency.

CentOS being open source is a huge advantage for us: we can customize and redistribute OSes and deploy them on servers without having to touch a button or connect a monitor or KVM (yes, I do realize RHEL can be installed headless via kickstart, but not in the way we deploy our custom OS images; that is possibly a topic for another blog).  The only con with CentOS is a small delay in updates, since it depends on the upstream source, RHEL, but this is a minor issue and all major updates reach CentOS almost instantly.

To conclude: my hat is off to Linus Torvalds for creating the Linux kernel, to the RHEL team, and especially to the CentOS team.  I hope this explains why CentOS is the best fit for my companies’ needs at the moment.

Green Low-Power Alternative Computing Intel NUC Boxes

The key features of low-power NUC boxes are that they are small, lightweight, portable and efficient; the low power draw means less heat and lower energy use, all of which adds up to “green, efficient computing for the environment”.  In addition, consider that during an outage you could run these units on a UPS for a much longer period than most laptops or desktop computers.  In an emergency or any issue with lack of power, these units will shine.

As a bit of a continuation of my green computing talk, these NUC boxes from Intel take it to the next level, using low-power laptop DDR3L SODIMM memory.  I recently bought a barebones Intel NUC J3455 box for my wife and was impressed at the power usage (literally 10W at the wall at 110V!), and it is still a quad-core, albeit at a slightly lower CPU frequency of just 1.5GHz, which works great for most functions.  I was able to upgrade it easily to 8GB of RAM (2x4GB); it has a built-in SD card slot, HDMI, VGA, 4 USB 3.0 ports and a 2.5″ SATA 3.0 port, into which I plugged a 256GB SSD and installed Ubuntu/Linux Mint.  It works quite well, but there appear to be a few bugs and some fidgeting required.  For example, the NIC cable came loose and it wouldn’t work until I replugged and rebooted it (the BIOS actually stopped showing the NIC at all, so somehow it got disabled on its own, and there was no option to re-enable or disable it in the BIOS).  You also have to disable the C-step functions or the CPU doesn’t work properly.  In Linux there looks to be a bug in the Intel graphics driver for this model that sometimes causes the graphics/mdm to restart.  Aside from the tinkering, it is well worth the cost savings and works well and reliably.

One thing I will say I am a little surprised at is that the unit gets fairly hot under heavy use, since there is no fan in the Intel NUC (though it is nothing compared to a laptop).  It comes down to the HDD, RAM and CPU being crammed into an incredibly small package.

The Vorke V1 J3160 is basically the same thing as above, but has only a single RAM slot instead of two.  However, it is priced well and uses only 6W instead of 10W, quite the power savings!  I have purchased the unit but have not had a chance to test it; I am hoping for the same or better results as with the Intel J3455 unit above.  I love that this one comes with 4GB RAM and a 64GB SSD, and includes Windows, so it’s ready to go out of the box (though many, like me, will just be installing Linux).  It could serve as an excellent backup box (e.g. plug it in somewhere else, hook up a bunch of large USB 3.0 HDDs, and keep another copy of your data).  Or, in my case, it could be an excellent “stand-by” computer with a mirror of your current config, “just in case” your main unit goes down, so you could get going instantly with the backup.  One other thing I am hoping for in this unit is a BIOS that is a little more stable and less buggy than Intel’s.  Finally, I am hopeful the unit will run cooler thanks to the 6W CPU and the built-in case fan, something I think Intel may want to consider too.  Time will tell, but I am looking forward to opening up and testing the Vorke V1, and I hope they keep producing similar units.

Dedicated Server Uptime Samples

I just logged into two random dedicated servers, and I am always happy about the uptimes we have:

13:05:37 up 960 days, 21 min,  1 user,  load average: 0.00, 0.01, 0.05

14:11:14 up 835 days, 18:01,  6 users,  load average: 0.09, 0.02, 0.01

Both of these servers have never been down; they have literally been up since they were first installed in the rack, as shown above.

The reason our uptime is always fantastic is not only that our facilities are outside the core disaster areas: we also never overload or oversell our servers.  We are not a budget provider, but we still offer excellent value in my opinion.  We’ve had a lot of clients switch to us from other hosts primarily on the reasoning that “no amount of features or gimmicks in the world matter if you have an unreliable service”.


Rebooting a Linux Dedicated server with no hard drives from the shell

I just thought I would finally test this, so I simulated a complete RAID array failure by pulling all of the drives at once.

This results in an input/output error when trying to do anything, so the question is: can you still reboot in this situation?

[root@testserver /]# reboot
-bash: /sbin/reboot: Input/output error
[root@testserver /]# shutdown -rn now
-bash: /sbin/shutdown: Input/output error
[root@testserver /]# shutdown
-bash: /sbin/shutdown: Input/output error
[root@testserver /]# uptime
13:47:10 up 41 min,  1 user,  load average: 0.00, 0.00, 0.00

Reboot by sending commands directly to /proc

[root@testserver /]# echo 1 > /proc/sys/kernel/sysrq     # enable the magic SysRq interface
[root@testserver /]# echo b > /proc/sysrq-trigger        # 'b' = reboot immediately, no sync or unmount

And sure enough the server rebooted.  This works even with the drives gone because echo is a shell builtin and /proc lives in memory, so nothing needs to be read from disk.  It could be handy if you have a remote server without remote hands or remote reboot (in this case we have both on-site, so there was no risk, and this was a test server).
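One caveat: writing to /proc/sysrq-trigger only works if the magic SysRq interface is enabled at that moment, which is why the first echo is needed.  To have it ready before a disk failure ever happens, it can be enabled persistently (a sketch; the file name under /etc/sysctl.d/ is arbitrary):

```conf
# /etc/sysctl.d/90-sysrq.conf (file name is arbitrary)
# Enable the magic SysRq interface at boot so /proc/sysrq-trigger
# accepts commands even after the root disks have disappeared.
kernel.sysrq = 1
```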

What dmesg looks like when the drives are removed and arrays degraded:
[  559.302943] ata3: exception Emask 0x10 SAct 0x0 SErr 0x1810000 action 0xe frozen
[  559.302988] ata3: SError: { PHYRdyChg LinkSeq TrStaTrns }
[  559.303048] ata3: hard resetting link
[  559.303054] ata3: nv: skipping hardreset on occupied port
[  560.024048] ata3: SATA link down (SStatus 0 SControl 300)
[  565.024048] ata3: hard resetting link
[  565.024054] ata3: nv: skipping hardreset on occupied port
[  565.327053] ata3: SATA link down (SStatus 0 SControl 300)
[  565.327064] ata3: limiting SATA link speed to 1.5 Gbps
[  570.327045] ata3: hard resetting link
[  570.327050] ata3: nv: skipping hardreset on occupied port
[  570.630048] ata3: SATA link down (SStatus 0 SControl 300)
[  570.630059] ata3.00: disabled
[  570.630078] ata3: EH complete
[  570.630087] sd 2:0:0:0: rejecting I/O to offline device
[  570.630104] ata3.00: detaching (SCSI 2:0:0:0)
[  570.630125] sd 2:0:0:0: [sda] killing request
[  570.630153] md: super_written gets error=-5, uptodate=0
[  570.630159] md/raid10:md2: Disk failure on sda2, disabling device.
[  570.630162] md/raid10:md2: Operation continuing on 1 devices.
[  570.630257] end_request: I/O error, dev sda, sector 58605128
[  570.630291] md: super_written gets error=-5, uptodate=0
[  570.633517] sd 2:0:0:0: [sda] Synchronizing SCSI cache
[  570.633651] sd 2:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  570.633659] sd 2:0:0:0: [sda] Stopping disk
[  570.633680] sd 2:0:0:0: [sda] START_STOP FAILED
[  570.633684] sd 2:0:0:0: [sda]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  570.655206] RAID10 conf printout:
[  570.655210]  --- wd:1 rd:2
[  570.655214]  disk 0, wo:0, o:1, dev:sdb2
[  570.655217]  disk 1, wo:1, o:0, dev:sda2
[  570.659025] RAID10 conf printout:
[  570.659029]  --- wd:1 rd:2
[  570.659032]  disk 0, wo:0, o:1, dev:sdb2
[  570.659313] md: md1 still in use.
[  570.738752] md: md2 still in use.
[  570.739106] md/raid1:md1: Disk failure on sda3, disabling device.
[  570.739109] md/raid1:md1: Operation continuing on 1 devices.
[  570.739380] md/raid10:md0: Disk failure on sda1, disabling device.
[  570.739382] md/raid10:md0: Operation continuing on 1 devices.
[  570.739412] md: unbind<sda2>
[  570.747449] md: export_rdev(sda2)
[  570.868144] RAID1 conf printout:
[  570.868148]  --- wd:1 rd:2
[  570.868168]  disk 0, wo:0, o:1, dev:sdb3
[  570.868175]  disk 1, wo:1, o:0, dev:sda3
[  570.873025] RAID1 conf printout:
[  570.873029]  --- wd:1 rd:2
[  570.873032]  disk 0, wo:0, o:1, dev:sdb3
[  570.999292] md: unbind<sda3>
[  571.007119] md: export_rdev(sda3)
[  573.633246] ata4: exception Emask 0x10 SAct 0x0 SErr 0x1810000 action 0xe frozen
[  573.633292] ata4: SError: { PHYRdyChg LinkSeq TrStaTrns }
[  573.633331] ata4: hard resetting link
[  573.633335] ata4: nv: skipping hardreset on occupied port
[  574.354052] ata4: SATA link down (SStatus 0 SControl 300)
[  579.354032] ata4: hard resetting link
[  579.354037] ata4: nv: skipping hardreset on occupied port
[  579.657041] ata4: SATA link down (SStatus 0 SControl 300)
[  579.657052] ata4: limiting SATA link speed to 1.5 Gbps
[  584.657032] ata4: hard resetting link
[  584.657038] ata4: nv: skipping hardreset on occupied port
[  584.960047] ata4: SATA link down (SStatus 0 SControl 300)
[  584.960058] ata4.00: disabled
[  584.960076] ata4: EH complete
[  584.960086] sd 3:0:0:0: rejecting I/O to offline device
[  584.960094] ata4.00: detaching (SCSI 3:0:0:0)
[  584.960124] sd 3:0:0:0: [sdb] killing request
[  584.960148] md: super_written gets error=-5, uptodate=0
[  584.960220] end_request: I/O error, dev sdb, sector 58605120
[  584.960265] md: super_written gets error=-5, uptodate=0
[  584.960322] end_request: I/O error, dev sdb, sector 58605128
[  584.960357] md: super_written gets error=-5, uptodate=0
[  584.960393] end_request: I/O error, dev sdb, sector 58605128
[  584.960428] md: super_written gets error=-5, uptodate=0
[  584.962495] sd 3:0:0:0: [sdb] Synchronizing SCSI cache
[  584.962765] sd 3:0:0:0: [sdb]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  584.962772] sd 3:0:0:0: [sdb] Stopping disk
[  584.962786] Buffer I/O error on device md2, logical block 524292
[  584.962805] sd 3:0:0:0: [sdb] START_STOP FAILED
[  584.962810] sd 3:0:0:0: [sdb]  Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[  584.962824] lost page write due to I/O error on md2
[  584.962841] end_request: I/O error, dev sdb, sector 58605128
[  584.962877] md: super_written gets error=-5, uptodate=0
[  584.962921] md: md1 still in use.
[  584.962931] Buffer I/O error on device md2, logical block 524293
[  584.963007] lost page write due to I/O error on md2
[  584.963020] Buffer I/O error on device md2, logical block 1048646
[  584.963095] lost page write due to I/O error on md2
[  584.963104] Buffer I/O error on device md2, logical block 1048647
[  584.963179] lost page write due to I/O error on md2
[  584.963188] Buffer I/O error on device md2, logical block 1048648
[  584.963274] lost page write due to I/O error on md2
[  584.963280] md: md2 still in use.
[  584.963299] Buffer I/O error on device md2, logical block 1048694
[  584.963381] lost page write due to I/O error on md2
[  584.963391] Buffer I/O error on device md2, logical block 1056863
[  584.963468] lost page write due to I/O error on md2
[  584.963478] Buffer I/O error on device md2, logical block 1056864
[  584.963553] lost page write due to I/O error on md2
[  584.963562] Buffer I/O error on device md2, logical block 6299690
[  584.963635] lost page write due to I/O error on md2
[  584.963800] Aborting journal on device md2-8.
[  584.963836] EXT4-fs error (device md2) in ext4_delete_inode: Readonly filesystem
[  584.963868] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 987 pages, ino 28972747; err -30
[  584.963877] md: super_written gets error=-19, uptodate=0
[  584.963883]
[  584.963888] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 7896 pages, ino 28972690; err -30
[  584.963893]
[  584.963953] EXT4-fs warning (device md2): ext4_end_bio: I/O error writing to inode 28972747 (size 36864 starting block 689771)
[  584.964303] JBD2: I/O error detected when updating journal superblock for md2-8.
[  584.964309] EXT4-fs error (device md2): ext4_journal_start_sb: Detected aborted journal
[  584.964316] EXT4-fs (md2): Remounting filesystem read-only
[  584.972785] md0: detected capacity change from 30005002240 to 0
[  584.972794] md: md0 stopped.
[  584.972810] md: unbind<sdb1>
[  584.979298] md: export_rdev(sdb1)
[  584.979344] md: unbind<sda1>
[  584.987280] md: export_rdev(sda1)
[  585.165084] md: super_written gets error=-19, uptodate=0
[  585.165102] md: super_written gets error=-19, uptodate=0
[  589.309845] EXT4-fs error (device md2): ext4_find_entry: reading directory #262476 offset 0
[  589.963162] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 1024 pages, ino 263495; err -30
[  589.963314]
[  599.310238] EXT4-fs error (device md2): ext4_find_entry: reading directory #262476 offset 0
[  604.963046] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 1024 pages, ino 262464; err -30
[  604.963153]
[  609.310592] EXT4-fs error (device md2): ext4_find_entry: reading directory #262476 offset 0
[  614.963071] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 1024 pages, ino 28186168; err -30
[  614.963176]
[  614.963181] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 8192 pages, ino 28186171; err -30
[  614.963298]
[  614.963301] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 8192 pages, ino 28972747; err -30
[  614.963405]
[  614.963408] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 8192 pages, ino 28972690; err -30
[  614.963507]
[  619.310906] EXT4-fs error (device md2): ext4_find_entry: reading directory #262476 offset 0
[  619.963133] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 1024 pages, ino 263495; err -30
[  619.963244]
[  629.311267] EXT4-fs error (device md2): ext4_find_entry: reading directory #262476 offset 0
[  634.963038] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 1024 pages, ino 262464; err -30
[  634.963144]
[  639.311561] EXT4-fs error (device md2): ext4_find_entry: reading directory #262476 offset 0
[  644.963069] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 1024 pages, ino 28186168; err -30
[  644.963172]
[  644.963176] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 8192 pages, ino 28186171; err -30
[  644.963288]
[  644.963291] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 8192 pages, ino 28972747; err -30
[  644.963395]
[  644.963397] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 8192 pages, ino 28972690; err -30
[  644.963499]
[  649.311846] EXT4-fs error (device md2): ext4_find_entry: reading directory #262476 offset 0
[  649.963202] EXT4-fs (md2): ext4_da_writepages: jbd2_start: 1024 pages, ino 263495; err -30
[  649.963319]
[  653.202216] ata3: exception Emask 0x10 SAct 0x0 SErr 0x50000 action 0xe frozen
[  653.202317] ata3: SError: { PHYRdyChg CommWake }
[  653.202379] ata3: hard resetting link