<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Scott's Blog]]></title><description><![CDATA[Random mumblings of a Global SE]]></description><link>https://blog.docbert.org/</link><image><url>https://blog.docbert.org/favicon.png</url><title>Scott&apos;s Blog</title><link>https://blog.docbert.org/</link></image><generator>Ghost 5.87</generator><lastBuildDate>Sat, 21 Mar 2026 02:48:41 GMT</lastBuildDate><atom:link href="https://blog.docbert.org/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Major Boost Mobile Website Vulnerability (BOLA)]]></title><description><![CDATA[<p>Dish Network is the 4th largest mobile provider in the US, with around 8.5 million customers - the majority of those being under the company&apos;s &quot;Boost Mobile&quot; brand, which it acquired from Sprint in 2021.</p><!--kg-card-begin: html--><style>strong {
color: #FF2A00;
font-weight: bold;
  }
</style><!--kg-card-end: html--><p>A few weeks ago whilst playing</p>]]></description><link>https://blog.docbert.org/boost-mobile-vulnerability/</link><guid isPermaLink="false">657fa1afcc19c60001cb05c0</guid><category><![CDATA[Boost-Mobile]]></category><category><![CDATA[Dish-Network]]></category><category><![CDATA[hacking]]></category><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Mon, 29 Aug 2022 15:01:00 GMT</pubDate><content:encoded><![CDATA[<p>Dish Network is the 4th largest mobile provider in the US, with around 8.5 million customers - the majority of those being under the company&apos;s &quot;Boost Mobile&quot; brand, which it acquired from Sprint in 2021.</p><!--kg-card-begin: html--><style>strong {
color: #FF2A00;
font-weight: bold;
  }
</style><!--kg-card-end: html--><p>A few weeks ago whilst playing with Boost Mobile&apos;s &quot;BoostOne&quot; app I stumbled across a <strong>security flaw that allowed access to the account information (including phone number, email, physical address and more), payment details/history, and bill information for any Boost Mobile user</strong>. More importantly, the flaw <strong>also allowed changing the user&apos;s PIN, which would allow anyone to port out or &quot;<a href="https://en.wikipedia.org/wiki/SIM_swap_scam?ref=blog.docbert.org">SIM swap</a>&quot; the user&apos;s account!</strong> &#xA0;All that is needed to access this information is either the user&apos;s phone number or their email address!</p><p>This vulnerability also appears to have affected customers of Dish Network&apos;s new Project Genesis 5G network.</p><p>I reached out to Dish Network to report this issue, and whilst their response was far from stellar (that&apos;ll be the subject of a future blog post!), it does appear that they fixed the issue within about 10 days of me first reaching out to them. At this point they still haven&apos;t confirmed to me that it&apos;s been fixed, however the problem no longer exists so it appears that it has been.</p><p>Enough intro, let&apos;s dig into the vulnerability...</p><p>Dish/Boost Mobile use a standard OAuth-style authentication mechanism, where a username and password are used to generate a long-expiry &quot;Refresh Token&quot;, which in turn is used to generate a short-lived &quot;Access Token&quot;. 
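</p><p>The two-step flow described above can be sketched as a toy token service. This is purely illustrative Python - the names and data structures are my own, not Boost Mobile&apos;s actual implementation - but it shows the property that matters for everything that follows: the access token alone is enough for the server to know who the caller is.</p><!--kg-card-begin: markdown-->

```python
# Toy sketch of an OAuth-style two-step token flow.
# Illustrative only -- NOT Boost Mobile's real API or code.
import secrets
import time

ACCESS_TTL = 15 * 60           # access tokens are short-lived

_refresh_store = {}            # refresh_token -> username
_access_store = {}             # access_token -> (username, expiry)

def issue_refresh_token(username, password):
    """Exchange credentials for a long-expiry refresh token.
    (Password verification is elided in this sketch.)"""
    token = secrets.token_urlsafe(32)
    _refresh_store[token] = username
    return token

def issue_access_token(refresh_token):
    """Exchange a refresh token for a short-lived access token.
    The access token is bound to exactly one user."""
    username = _refresh_store[refresh_token]
    token = secrets.token_urlsafe(32)
    _access_store[token] = (username, time.time() + ACCESS_TTL)
    return token

def user_for(access_token):
    """The server can always recover the caller's identity
    from the access token alone."""
    username, expiry = _access_store[access_token]
    if time.time() > expiry:
        raise PermissionError("access token expired")
    return username
```

<!--kg-card-end: markdown--><p>The key point is that the server can derive the caller&apos;s identity from the access token by itself - no extra &quot;username&quot; parameter should ever be needed.</p><p>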
&#xA0;In this example we have a refresh token that corresponds to the user &apos;<em>scott@doc.net.au</em>&apos;, and we&apos;re using this to request an access token.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/01-refreshtoken.png" class="kg-image" alt loading="lazy" width="1025" height="212" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/01-refreshtoken.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/01-refreshtoken.png 1000w, https://blog.docbert.org/content/images/2022/08/01-refreshtoken.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>For simplicity, I&apos;ve saved this access token in a variable, and the same access token is then used for all future requests. Remember, this access token corresponds to the &apos;<em>scott@doc.net.au</em>&apos; account, so we would only expect it to have access to data for that account.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/02-token.png" class="kg-image" alt loading="lazy" width="1025" height="86" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/02-token.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/02-token.png 1000w, https://blog.docbert.org/content/images/2022/08/02-token.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>Using this token, we can make a request for account information. This returns numerous details for the account, including the owner&apos;s name, address, phone number, email address, etc. 
&#xA0;Of course, as these details are for my own account, there&apos;s no security issue here - this is information that I would expect to have access to.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/03-mine.png" class="kg-image" alt loading="lazy" width="1025" height="316" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/03-mine.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/03-mine.png 1000w, https://blog.docbert.org/content/images/2022/08/03-mine.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>What is interesting about this request is the actual request URL itself - it includes the username that the information is being requested for. As the only information returned should be for the user that is logged in, it doesn&apos;t seem that there should be a need for that parameter to be passed.</p><p>If we try the same call without the username parameter, we get an error :</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/03a-nousername.png" class="kg-image" alt loading="lazy" width="1025" height="148" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/03a-nousername.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/03a-nousername.png 1000w, https://blog.docbert.org/content/images/2022/08/03a-nousername.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>So seemingly the username is being used for the query - it&apos;s not just extraneous information in the request. </p><p>So what happens if we try to request information for a different user? Obviously the correct action would be to deny that request. 
&#xA0;Unfortunately, that&apos;s not at all what happened!</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/04-Other1-1.png" class="kg-image" alt loading="lazy" width="1025" height="421" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/04-Other1-1.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/04-Other1-1.png 1000w, https://blog.docbert.org/content/images/2022/08/04-Other1-1.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>In this case we&apos;ve requested account details for a different username (<em>xxx@wow.com</em>), whilst still using the authentication credentials for the initial account. &#xA0;This request should have been denied, but it was NOT! &#xA0;Instead we are given full account details for the other user! <strong>This data includes name, email address, physical address, phone number, phone details/IMEI and more</strong>.</p><p>(Note that in this case both of the accounts being used here are my accounts, however they are completely separate/unrelated accounts. They have separate email addresses, physical addresses, phone numbers, and even a slightly different surname between the two. Boost Mobile has no way of knowing that these two accounts belong to the same person - for all intents and purposes these accounts belong to completely separate people.)</p><p>This type of issue is known as a &quot;<em>Broken Object Level Authorization</em>&quot; (BOLA) vulnerability. The website is correctly enforcing authentication (you require a valid token to access it), however it is not enforcing authorization at the object level, so data for all users can be accessed, not just the user that is authenticated. 
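</p><p>Stripped to its essence, the broken check can be expressed in a few lines. The sketch below is illustrative Python with made-up names - not Boost&apos;s actual code - contrasting what the vulnerable API effectively did with what it should have done:</p><!--kg-card-begin: markdown-->

```python
# Minimal illustration of a BOLA flaw: authentication is enforced,
# but object-level authorization is not. Illustrative code only.

ACCOUNTS = {
    "scott@doc.net.au": {"name": "Scott", "phone": "555-0001"},
    "xxx@wow.com":      {"name": "Other", "phone": "555-0002"},
}
TOKENS = {"token-abc": "scott@doc.net.au"}  # access token -> authenticated user

def get_account_broken(access_token, username):
    """What the vulnerable API effectively did: check only that the
    token is valid, then query whatever username the caller supplied."""
    if access_token not in TOKENS:
        raise PermissionError("not authenticated")
    return ACCOUNTS[username]            # BOLA: no ownership check!

def get_account_fixed(access_token):
    """The fix: derive the object from the authenticated identity,
    and never trust a caller-supplied username at all."""
    try:
        username = TOKENS[access_token]
    except KeyError:
        raise PermissionError("not authenticated")
    return ACCOUNTS[username]
```

<!--kg-card-end: markdown--><p>Note that the fixed version doesn&apos;t try to validate the caller-supplied username - it ignores it entirely, deriving the object to return from the authenticated identity alone.</p><p>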
BOLA is listed at #1 on the <a href="https://owasp.org/www-project-api-security/?ref=blog.docbert.org">OWASP API Security list</a>.</p><p>The attack above would require us to first know the email address for the user whose account we were trying to gain information on, but it turns out we don&apos;t even need that, as the username parameter can also be passed a phone number.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/05-Other2-1.png" class="kg-image" alt loading="lazy" width="1025" height="421" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/05-Other2-1.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/05-Other2-1.png 1000w, https://blog.docbert.org/content/images/2022/08/05-Other2-1.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>Once again, this was using a phone number that does not correspond to the authentication token that we&apos;re using to make the request.</p><p>In addition to the data returned by the &apos;account&apos; request, we can also request additional details around billing, payments, etc. &#xA0;These use a URL that includes the account ID for the account - which is included in the output above. 
&#xA0;(&quot;id&quot;:&quot;223xxxxxx066&quot;).</p><p>&quot;Bill&quot; contains details of the user&apos;s upcoming bill.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/06-bill.png" class="kg-image" alt loading="lazy" width="1025" height="505" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/06-bill.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/06-bill.png 1000w, https://blog.docbert.org/content/images/2022/08/06-bill.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>&quot;Payment Method&quot; shows the details for the credit card set up to make the next payment - thankfully only the last 4 digits plus the expiry and not the full number, although the &quot;token&quot; field could also be of interest.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/07-paymentmethods-1.png" class="kg-image" alt loading="lazy" width="1025" height="233" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/07-paymentmethods-1.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/07-paymentmethods-1.png 1000w, https://blog.docbert.org/content/images/2022/08/07-paymentmethods-1.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>&quot;Payment History&quot; shows details of prior payments, including URLs pointing to further details of those payments, including the card used to make them (again, only the last 4 digits/expiry).</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/08-paymenthistory.png" class="kg-image" alt loading="lazy" width="1025" height="275" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/08-paymenthistory.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/08-paymenthistory.png 1000w, https://blog.docbert.org/content/images/2022/08/08-paymenthistory.png 1025w" sizes="(min-width: 720px) 
720px"></figure><!--kg-card-begin: markdown--><h2 id="changing-data">Changing Data!</h2>
<!--kg-card-end: markdown--><p>To this point we have only been querying information from the API, however there are also multiple fields that can be updated - including the user&apos;s address, their password (which requires the current password so seemingly can&apos;t be abused), and their PIN code! &#xA0;This PIN code is what is used when authenticating to a Boost agent via website chat, social media or the phone, as well as being what is used to port a phone number out of Boost.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/09-pin.png" class="kg-image" alt loading="lazy" width="1025" height="169" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/09-pin.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/09-pin.png 1000w, https://blog.docbert.org/content/images/2022/08/09-pin.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>Note that while this response says the PIN has been &quot;set up&quot; successfully, it works the same even if a PIN has already been set on the account. Unlike the password, no additional authentication is required to set/change the PIN.</p><p>The impact of this flaw is significant. &#xA0;Starting with only an email address or a phone number we are able to reset the PIN and then port out the phone number - what is known as a <strong>SIM Swap Attack</strong>, commonly used by criminals to bypass two-factor authentication mechanisms.</p><hr><!--kg-card-begin: markdown--><h2 id="only-boost-mobile">Only Boost Mobile?</h2>
<!--kg-card-end: markdown--><p>Dish Network, the parent company behind Boost Mobile, actually owns/runs multiple phone providers in the US, including their new 5G network &quot;<a href="https://www.5gmobilegenesis.com/?ref=blog.docbert.org">Project Genesis</a>&quot;.</p><p>All of the API URLs above included a &quot;tenant&quot; parameter which was set to &quot;BOOST&quot;. Changing this to something else gives an &quot;Invalid tenant&quot; error, implying that this field is indeed being used to set the provider.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/10-invalidtenant-1.png" class="kg-image" alt loading="lazy" width="1025" height="149" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/10-invalidtenant-1.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/10-invalidtenant-1.png 1000w, https://blog.docbert.org/content/images/2022/08/10-invalidtenant-1.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>Changing the tenant to &quot;GENESIS&quot; did NOT give this invalid tenant error, but instead simply returned an empty set of results - implying that this issue existed not just for Boost Mobile customers, but also for Project Genesis customers. 
Unfortunately I don&apos;t have access to a Project Genesis phone number/email which would allow me to confirm this, but it does appear that these customers are affected as well.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/11-genesistenant-1.png" class="kg-image" alt loading="lazy" width="1025" height="147" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/11-genesistenant-1.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/11-genesistenant-1.png 1000w, https://blog.docbert.org/content/images/2022/08/11-genesistenant-1.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>The empty response is the same as the response that is returned when querying the &quot;Boost&quot; tenant with a phone number (username) that is not a Boost customer - so it&apos;s an indication that there is no account for that phone number in the tenant being queried. &#xA0;</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2022/08/12-invalidphone.png" class="kg-image" alt loading="lazy" width="1025" height="148" srcset="https://blog.docbert.org/content/images/size/w600/2022/08/12-invalidphone.png 600w, https://blog.docbert.org/content/images/size/w1000/2022/08/12-invalidphone.png 1000w, https://blog.docbert.org/content/images/2022/08/12-invalidphone.png 1025w" sizes="(min-width: 720px) 720px"></figure><p>Although I have not tested enough to be sure, it does not appear that there is any rate-limiting on this query, meaning it would be feasible to brute-force/enumerate phone numbers using this query looking for valid Boost Mobile customers.</p><hr><!--kg-card-begin: markdown--><h2 id="other-issues-with-the-boost-mobile-websiteapi">Other Issues with the Boost Mobile website/API</h2>
<!--kg-card-end: markdown--><p>Although the issue described above is by far the most critical issue I&apos;ve found, there are a number of other issues with the Boost website/API. &#xA0;I&apos;ll keep those for another day...</p><hr><!--kg-card-begin: markdown--><h2 id="references">References</h2>
<!--kg-card-end: markdown--><ul><li><a href="https://owasp.org/www-project-api-security/?ref=blog.docbert.org">OWASP API Top 10</a> list, with <a href="https://github.com/OWASP/API-Security/blob/master/2019/en/src/0xa1-broken-object-level-authorization.md?ref=blog.docbert.org">Broken Object Level Authorization</a> at #1 on the list.</li><li>&quot;<a href="https://arstechnica.com/information-technology/2017/10/t-mobile-website-bug-apparently-exploited-to-mine-sensitive-account-data/?ref=blog.docbert.org">T-Mobile customer data plundered thanks to bad API</a>&quot; - Ars Technica</li><li>&quot;<a href="https://inonst.medium.com/a-deep-dive-on-the-most-critical-api-vulnerability-bola-1342224ec3f2?ref=blog.docbert.org">A Deep Dive On The Most Critical API Vulnerability &#x2014; BOLA</a>&quot; - Inon Shkedy</li></ul>]]></content:encoded></item><item><title><![CDATA[How Not to Handle a Security Breach]]></title><description><![CDATA[<p>A few months ago I wrote about how I had gained access to <a href="https://blog.docbert.org/hacking-82-hotels-at-once/">82 separate hotel IPTV systems within an hour or so</a>.</p><p>What I didn&apos;t mention in that post was that I actually discovered the issue around 6 months before that post was published, but had spent</p>]]></description><link>https://blog.docbert.org/how-not-to-handle-a-security-breach/</link><guid isPermaLink="false">657fa1afcc19c60001cb05be</guid><category><![CDATA[hacking]]></category><category><![CDATA[iptv]]></category><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Mon, 27 Jan 2020 00:04:55 GMT</pubDate><content:encoded><![CDATA[<p>A few months ago I wrote about how I had gained access to <a href="https://blog.docbert.org/hacking-82-hotels-at-once/">82 separate hotel IPTV systems within an hour or so</a>.</p><p>What I didn&apos;t mention in that post was that I actually discovered the issue around 6 months before that post was published, but had spent the intervening 6 months attempting to get the 
relevant parties to fix the problem. &#xA0;I took a 2-pronged approach to doing this - reaching out to both the vendor that provides the system, and the hotel that I had discovered the problem in.</p><p>The system itself is provided by an Australian company called <a href="http://www.ezestream.com.au/?ref=blog.docbert.org">Ezestream</a>, which also appears to go by the name Movielink. &#xA0;Despite multiple emails and even attempts to communicate with a few of their people via LinkedIn I was unable to even get a response from Ezestream, so I gave up on that fairly quickly.</p><p>The hotel property itself was far more responsive, and very quickly put me in touch with someone from the Marriott security team, although from there things went downhill quickly.</p><p>Marriott&apos;s initial response was that before they would look at the issue I had to join their Bug Bounty program on HackerOne and submit the issue there, and that after I did so they would &quot;approve it&quot; (whatever that means?). I pushed back initially on the grounds that their HackerOne program only covers issues on their website, which this was not, but they basically refused to communicate with me or take any action unless I did.</p><p>I eventually gave in and submitted the issue via HackerOne, including the full draft version of the blog post, as well as specifics not included in the post such as IP addresses, usernames, passwords, screenshots of exploits, and a few other random vulnerabilities in the system that I had found.</p><p>Two weeks later I was contacted again and told that the issue was fixed! &#xA0;In fact, it was actually fixed a week previously, so only about a week after it was reported! &#xA0;Great news! &#xA0;I was also told that they couldn&apos;t confirm it was fixed as I hadn&apos;t provided IP addresses of the systems in question (which I had), so they were simply willing to take the word of the vendor that it had been fixed. 
&#xA0;The report was closed, and the issue marked as resolved.</p><p>Except, of course, the issue wasn&apos;t fixed. The relevant systems were still accessible from the internet. &#xA0;The exact exploits that I had previously provided (as simple one-liner curl statements no less) still worked as before. The root-level SSH keys that had been publicly available were still in use across all 80+ properties. &#xA0;It did appear that the non-password-protected Samba share had been removed, and a few default passwords for the web interface had been changed (but the bug that allowed authentication to be bypassed hadn&apos;t been fixed), but little more seemed to have changed.</p><p>Over the next few months Marriott made multiple claims that they were still looking into the issues and working with the vendor, but after that they went quiet and haven&apos;t made any updates since August!</p><p>Fast forward to almost 6 months after the problem was first reported and it did appear that the issue had finally been resolved! &#xA0;None of the systems appeared to be accessible via the internet, and although I hadn&apos;t received any confirmation of a fix from Marriott (nor, for that matter, any updates in the past few months) I decided to finally publish the blog post describing the issue, which had been sitting in draft for a few days short of 6 months!</p><hr><p>A week or so later I happened to be in Singapore - a city where a few hotels that used the Ezestream system were located - so I decided to pick one of those properties to stay in for a night whilst I was there.</p><p>Sure enough, the TV system in use was from Ezestream, and not surprisingly connecting my laptop to the Ethernet cable going to the back of the TV gave full access to the local system, with all of the same bugs that had previously been reported (including those that allowed bypassing authentication). The root-level SSH keys that had previously been used were still active. 
In short, the security of the local system didn&apos;t appear to have been improved at all in the preceding 7 months.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2020/01/image-6.png" class="kg-image" alt loading="lazy"></figure><p>What&apos;s more, the links to the Ezestream server and all other properties were all still active, and still using the same keys. I was trivially able to jump over to the server in the property in New Zealand where I had originally discovered the issue, or any other property in the system. &#xA0;It appears that in the end their fix was not to stop access to their servers, but simply to limit which IPs can access them to those where their systems are located. This might be a half-way valid solution, if not for the fact that the networks in the properties running these systems are publicly accessible by virtue of the fact that they run to the network port on the back of the TVs in every room!</p><hr><p>As I said at the start of my previous post, &quot;Security is hard to get right&quot;. &#xA0;It&apos;s easy to see how a chain like Marriott could fail to notice security issues like these on a 3rd-party managed system running inside many of their properties. &#xA0;But there is absolutely no excuse for not taking quick and decisive action once they are notified of such issues. 
Taking over 5 months to make even rudimentary changes that don&apos;t actually solve the issue is unacceptable, as is taking the word of a vendor that such a problem has been fixed without confirming it (or even better, doing a full audit/pen-test on the system) - especially given that this was reported to them right around the same time that they were being fined US$124 million for a previous security issue!</p><p>Even today, over 7 months after the issue was reported, the HackerOne incident is still only in the &quot;Triaged/Open&quot; state, with no progress for several months...</p><!--kg-card-begin: html--><div class="image-div" style="width: 694px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2020/01/image-5.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Hacking 82 Hotel TV Systems in 82 Minutes or Less]]></title><description><![CDATA[<p>Security is hard to get right, there&apos;s no question about that. &#xA0;But sometimes you come across things that are just so simply insecure that it&apos;s clear nobody put <em>any</em> thought into security - and there&apos;s no excuse for that, especially when it&apos;</p>]]></description><link>https://blog.docbert.org/hacking-82-hotels-at-once/</link><guid isPermaLink="false">657fa1afcc19c60001cb05bc</guid><category><![CDATA[hacking]]></category><category><![CDATA[iptv]]></category><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Tue, 03 Dec 2019 03:55:54 GMT</pubDate><content:encoded><![CDATA[<p>Security is hard to get right, there&apos;s no question about that. &#xA0;But sometimes you come across things that are just so simply insecure that it&apos;s clear nobody put <em>any</em> thought into security - and there&apos;s no excuse for that, especially when it&apos;s done by a company that claims to &quot;specialise in [...] 
Networking Technologies&quot;.</p><p>I&apos;ve blogged before about some of my digging into Hotel IP-based TV systems, where with minimal effort I was able to obtain everything from basic access right up to <a href="https://blog.docbert.org/hacking-iptv/">full, remote access to the hotel&apos;s entertainment systems</a>. &#xA0;(4 years later that particular system is still up and running, and still remotely accessible with default passwords)</p><p>However, when staying in a hotel in New Zealand recently I came across what could only be described as the motherlode - full root access not just to that hotel&apos;s TV systems, but also to the equivalent systems in 81 other hotels. &#xA0;And all of it accessible over the internet!</p><h2 id="iptv">IPTV</h2><p>I always love walking into a hotel room for the first time and seeing &quot;Hello Mr Howard&quot; on the powered-on TV screen. &#xA0;It means that the hotel is using some form of intelligent TV system - one that can turn the TV on or off on-demand, and display custom content. &#xA0;In the case of this particular hotel in New Zealand, there was no set-top box, just an IP-based TV. &#xA0;In general these TVs themselves are fairly secure and uninteresting, which normally limits what can be found, but still leaves the potential for an insecure server driving them.</p><p>After a few seconds of network snooping I had the IP address of the server, and from the network snooping alone I could tell this was a system I&apos;d come across before and not had any luck with, so I wasn&apos;t expecting much. 
&#xA0;An nmap scan of the server showed a myriad of open ports - everything from the expected 22 (SSH) and 80/443 (used by the TV, if nothing else), through to a number of more interesting things like 3306 (MySQL), 2049 (NFS) and 139/445 (CIFS), as well as a number of seemingly random ports like 81, 82 and 5555.</p><p>Ports 80 and 443, as expected, both had web servers running on them, but with no default page and no indication of valid paths other than those that I&apos;d seen in the network capture before (which were just TV-Server communication). MySQL and NFS also turned out to be uninteresting (no access and nothing shared respectively), but scanning CIFS showed a number of shares available :</p><!--kg-card-begin: html--><div class="image-div" style="width: 512px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic1.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html--><p>Including one that allowed guest access :</p><!--kg-card-begin: html--><div class="image-div" style="width: 471px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic2.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html--><p>Not surprisingly given the &quot;back&quot; in the name of the share, this contained a number of dated directories that contained backup copies of what appeared to be the entire server. 
&#xA0;However, very surprisingly, it didn&apos;t contain backups for the server in this hotel, but instead for about 70 other unrelated hotels around the APAC region!</p><!--kg-card-begin: html--><div class="image-div" style="width: 608px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic3.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html--><p> Digging through one of these backups turned out to be a goldmine in terms of understanding the system, and gave URL paths to access the IPTV system itself, to a copy of phpMyAdmin (a web-based MySQL admin tool), password files (both system and application level), as well as many other interesting-looking things.</p><p>phpMyAdmin, with default MySQL credentials of course, gave access to the MySQL data, and some tables that included user credentials. &#xA0;To their credit, these were hashed. Not so much to their credit, most of the hashes were trivially reversed with a simple Google search - not surprising given that they were the same as the username in most cases...</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic4.png" class="kg-image" alt loading="lazy"></figure><p>These passwords could then be used to access the two main web GUI systems on the box - one called &apos;NTV-Admin&apos; and a second called &apos;ADS&apos;.</p><h2 id="ntv-admin">NTV-Admin</h2><p>NTV-Admin turned out to be a fairly typical hotel TV admin GUI, and the password gathered above gave full admin access to it. 
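</p><p>As an aside on why those hashes fell so easily: an unsalted, fast hash of a common string is effectively a public constant - the digest itself can be pasted straight into a search engine. A quick illustration in Python (the salted, slow alternative shown at the end is the standard approach, not something this vendor was using):</p><!--kg-card-begin: markdown-->

```python
import hashlib
import os

# An unsalted MD5 of a common password is a well-known constant;
# pasting the digest into a search engine "reverses" it instantly.
digest = hashlib.md5(b"admin").hexdigest()
print(digest)  # 21232f297a57a5a743894a0e4a801fc3

# The standard alternative: a per-user random salt plus a slow KDF,
# so identical passwords hash to different, non-searchable values.
salt = os.urandom(16)
kdf = hashlib.pbkdf2_hmac("sha256", b"admin", salt, 600_000)
print(kdf.hex())  # different every time the salt changes
```

<!--kg-card-end: markdown--><p>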
This allowed access to a number of things such as hotel occupancy (128 rooms checked in) :</p><!--kg-card-begin: html--><div class="image-div" style="width: 946px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic5.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html--><p>Listings of all of the movies available, plus the ability to add/delete movies, change prices, etc (not as interesting as it seems given that all movies were free to start with!) :</p><!--kg-card-begin: html--><div class="image-div" style="width: 896px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic6.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html--><p>View details of which rooms were checked in, who was checked into them, and even when they had checked in :</p><!--kg-card-begin: html--><div class="image-div" style="width: 792px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic7.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html--><p>And even a historic record for each room, including all guests checking in and out for the past &gt;12 months - this time even with first names (masked) :</p><!--kg-card-begin: html--><div class="image-div" style="width: 921px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic8.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html--><p>Plus the ability to remotely control the TV, or even send a message to be displayed on a single TV, or all TVs in the hotel.</p><!--kg-card-begin: html--><div class="image-div" style="width: 729px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic9.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html--><p></p><h2 id="ads">ADS</h2><p>However it is the &apos;ADS&apos; system that turned out to be far more 
interesting. &#xA0;I&apos;m still not entirely sure what &apos;ADS&apos; stands for, but it was the first few items on the menu that looked most interesting - &quot;Root options&quot; and &quot;Cross-site Management&quot; :</p><!--kg-card-begin: html--><div class="image-div" style="width: 931px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic11.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html--><h3 id="root-options">Root options</h3><!--kg-card-begin: html--><div class="image-div" style="width: 310px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic12.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html--><p>You can probably already guess what an option called &quot;Linux Console&quot; under a menu with &quot;Root&quot; in it does, but for completeness...</p><!--kg-card-begin: html--><div class="image-div" style="width: 573px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic13.png" class="kg-image"></figure><p></p></div><!--kg-card-end: html--><h2 id="from-1-to-81"><strong>From </strong>1 to 81</h2><p>A quick look at the Linux level showed a number of SSH processes being used to open TCP tunnels between this system and a master system at the software vendor, as well as some scripts to facilitate backing up between different customers - which explains the backup files that started it all. &#xA0;This also showed that it was possible to <strong>jump between the systems at various customer sites</strong> via the vendor&apos;s central system.
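</p><p>As an aside, a reverse tunnel of this kind is usually driven by an OpenSSH config fragment along these lines - to be clear, the host name, port and key path below are my own illustrative placeholders, not details taken from the vendor&apos;s systems :</p><pre><code># On each customer system: keep a tunnel up to the vendor's master,
# exposing the local web GUI on a vendor-side port (one port per site).
Host vendor-master
    HostName master.vendor.example
    User tunnel
    IdentityFile /root/.ssh/id_rsa       # used non-interactively, so typically passphrase-less
    RemoteForward 20001 localhost:80     # anyone reaching the master can reach this GUI
    ServerAliveInterval 60</code></pre><p>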
&#xA0;Not surprisingly, the systems all seemed to use the <strong>same passwords</strong> and the <strong>same root SSH keys</strong> for access, with all of the relevant private keys being stored on the system with no passphrase.</p><p>ie, access to one system gives full access to ALL of the other systems, located at other hotels around the world!</p><p>Coming back to the &apos;ADS&apos; web interface and going to &quot;Cross-site Management&quot; gives us a list of all of those systems - with a total of 82 systems showing a &quot;cloud_online&quot; date within the last few hours, implying they were actively connected (there were a dozen others that had not connected for some time, including some demo systems) :</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic14.png" class="kg-image" alt loading="lazy"></figure><p>Remember those TCP tunnels I mentioned above? &#xA0;Well, clicking on the links in the table above takes us to a URL that points to the vendor&apos;s system, with a unique port for each system - which then forwards over that SSH-forwarded TCP tunnel back to the relevant system. &#xA0;ie, <strong>every single one of these systems is accessible via the internet, with the traffic being forwarded by the system vendor! </strong>What&apos;s more, after attempting to access a system it shows up in the &quot;Recent&quot; list, where in addition to the forwarded URL being listed, for many systems so is a &quot;direct_access&quot; URL which provides access to the system directly without having to route through the vendor.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/pic15.png" class="kg-image" alt loading="lazy"></figure><p></p><h2 id="summary">Summary</h2><p>There&apos;s so much wrong here it&apos;s hard to know where to start.
&#xA0;There are the obvious security issues, such as :</p><ul><li>Default passwords used everywhere</li><li>Same SSH keys used on all systems</li><li>Many systems directly accessible via the Internet</li><li>ALL systems accessible over the internet via the vendor&apos;s system</li><li>All access over unencrypted HTTP</li><li>No additional authentication to obtain root access on the systems</li><li>CIFS shares containing backup data (including passwords/shadow files/etc) with guest access</li></ul><p>And that&apos;s really just scratching the surface. &#xA0;However there&apos;s a potentially bigger issue: each customer&apos;s system seems to be used as a backup target for other customers - many of which would be competitors. This data, which is stored without any form of encryption and (in at least the case of this system) zero authentication, could be mined to find out things like occupancy rates for a competing hotel given it contains checkin/checkout data.</p>]]></content:encoded></item><item><title><![CDATA[XtremIO X2 Snapshots - REST API]]></title><description><![CDATA[<p>(If you&apos;re not familiar with the XtremIO REST API I&apos;d suggest first <a href="https://blog.docbert.org/using-the-xtremio-rest-api-part-1/">reading my posts on using it</a>, in particular <a href="https://blog.docbert.org/using-the-xtremio-rest-api-part-3/">part 3 where I covered Snapshots</a>)</p><p></p><p>I <a href="https://blog.docbert.org/xtremio-x2-snapshots/">wrote previously</a> about some of the changes to Snapshots in X2.</p><p>A number of corresponding changes were also made</p>]]></description><link>https://blog.docbert.org/xtremio-x2-snapshots-rest-api/</link><guid isPermaLink="false">657fa1afcc19c60001cb05bb</guid><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Thu, 06 Jun 2019 08:58:35 GMT</pubDate><content:encoded><![CDATA[<p>(If you&apos;re not familiar with the XtremIO REST API I&apos;d suggest first <a
href="https://blog.docbert.org/using-the-xtremio-rest-api-part-1/">reading my posts on using it</a>, in particular <a href="https://blog.docbert.org/using-the-xtremio-rest-api-part-3/">part 3 where I covered Snapshots</a>)</p><p></p><p>I <a href="https://blog.docbert.org/xtremio-x2-snapshots/">wrote previously</a> about some of the changes to Snapshots in X2.</p><p>A number of corresponding changes were also made to the REST API, in particular with version 6.1, which introduced version 3 (/v3/) of the REST API. &#xA0;Most notably, a new &apos;command&apos; concept was added for many snapshot operations, where operations like taking a new snapshot are now done as a PUT to the relevant /command/ URL, rather than as a POST on the snapshot object.</p><p>As previously, the REST API guide is still the best place to look for all available options, but be aware that there are two distinct API guides - one for API version 2.x, and a second for version 3.x. &#xA0;In order to find the commands below - which are the ones you should be using on an X2 array - you&apos;ll need to look in the version 3.x guide.</p><h3 id="creating-a-repurpose-copy">Creating a Repurpose Copy</h3><p>In the v2 API, creating a repurpose copy was done by making a POST operation against the snapshots object. &#xA0;This changes in v3 to being a <strong>PUT</strong> operation against the URI <strong>/api/json/v3/commands/consistency-groups/create-repurpose-copy</strong>.
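</p><p>If you&apos;re scripting rather than using curl, the same call is easy to build in any language. As a minimal sketch in Python, using only the standard library (the XMS hostname, credentials and CG names are placeholders - substitute your own) :</p><pre><code># Minimal sketch only - builds the same PUT request as the curl example below.
import base64
import json
import urllib.request

def repurpose_copy_request(xms, user, password, from_cg, new_cg):
    """Build a PUT request for the v3 create-repurpose-copy command."""
    url = f"https://{xms}/api/json/v3/commands/consistency-groups/create-repurpose-copy"
    body = json.dumps({
        "from-consistency-group-id": from_cg,
        "new-consistency-group-name": new_cg,
    }).encode()
    req = urllib.request.Request(url, data=body, method="PUT")
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {creds}")
    req.add_header("Content-Type", "application/json")
    return req

req = repurpose_copy_request("xms", "admin", "Xtrem10", "OracleProd", "OracleSnap1")
# urllib.request.urlopen(req) then submits it; as with curl's -k flag, an
# array with a self-signed certificate needs certificate checking relaxed.</code></pre><p>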
As previously, the details of what you want to take a copy of, new object names, etc, are all passed as options in the body of the request.</p><p>For example, to take a new snapshot of the Consistency Group &quot;OracleProd&quot;, putting the new snapshots into a Linked Consistency Group called &quot;OracleSnap1&quot;, we would use :</p><pre><code>[scott ~]$ curl -X PUT -d &apos;{&quot;from-consistency-group-id&quot;:&quot;OracleProd&quot;,&quot;new-consistency-group-name&quot;:&quot;OracleSnap1&quot;}&apos; -u admin:Xtrem10 -k -s https://xms/api/json/v3/commands/consistency-groups/create-repurpose-copy

{
    &quot;content&quot;: {
        &quot;volumes&quot;: [
            {
                &quot;href&quot;: &quot;https://xms/api/json/v3/types/volumes/75b5c5ec2e114e2d972ff7709826b97a&quot;,
                &quot;index&quot;: 57,
                &quot;guid&quot;: &quot;75b5c5ec2e114e2d972ff7709826b97a&quot;,
                &quot;name&quot;: &quot;OracleProd1.1559727279785&quot;
            },
[...  trimmed ...]
        ],
        &quot;consistency-group&quot;: {
            &quot;href&quot;: &quot;https://xms/api/json/v3/types/consistency-groups/02a8c28800d44f63990e97d63c3f9d80&quot;,
            &quot;index&quot;: 1,
            &quot;guid&quot;: &quot;02a8c28800d44f63990e97d63c3f9d80&quot;,
            &quot;name&quot;: &quot;OracleSnap1&quot;
        }
    }
}
</code></pre><p>The response includes details of both the Linked Consistency Group that is created and of each of the new snapshot volume(s) created by the operation.</p><h3 id="creating-a-protection-copy">Creating a Protection Copy</h3><p>Taking a Protection Copy (a read-only snapshot) is fundamentally the same as the above, only using a different URI - <strong>/api/json/v3/commands/consistency-groups/create-protection-copy</strong>.</p><h3 id="refreshing-a-consistency-group">Refreshing a Consistency Group</h3><p>Refreshing a CG (or Linked CG), either from another CG or from a Snapshot Set, uses a PUT request to the URI <strong>/api/json/v3/commands/consistency-groups/refresh-data</strong>, once again with the relevant options passed in the request body.</p><p>eg, to refresh the Linked CG we created above, we would use :</p><pre><code>[scott ~]$ curl -X PUT -d &apos;{&quot;from-consistency-group-id&quot;:&quot;OracleProd&quot;,&quot;to-consistency-group-id&quot;:&quot;OracleSnap1&quot;,&quot;no-backup&quot;:&quot;true&quot;}&apos; -u admin:Xtrem10 -k -s https://xms/api/json/v3/commands/consistency-groups/refresh-data
[scott ~]$</code></pre><p>Note that this time there is no response - the fact that the query completes successfully is our indication that the refresh completed.</p><h3 id="other-operations">Other Operations</h3><p>Other operations, such as adding volumes to a Consistency Group, are similar. &#xA0;The details of the command URLs for these are all covered in the REST API guide.</p>]]></content:encoded></item><item><title><![CDATA[XtremIO X2 Snapshots]]></title><description><![CDATA[<p>I&apos;ve written previously about using snapshots on XtremIO, however things have changed a little with XtremIO X2 so it&apos;s time for an update.</p><h4 id="snapshot-types">Snapshot Types</h4><p>Previously when taking a snapshot you could elect to take either a &quot;read-only&quot; or &quot;read-write&quot; snapshot. The</p>]]></description><link>https://blog.docbert.org/xtremio-x2-snapshots/</link><guid isPermaLink="false">657fa1afcc19c60001cb05ba</guid><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Wed, 05 Jun 2019 10:39:17 GMT</pubDate><content:encoded><![CDATA[<p>I&apos;ve written previously about using snapshots on XtremIO, however things have changed a little with XtremIO X2 so it&apos;s time for an update.</p><h4 id="snapshot-types">Snapshot Types</h4><p>Previously when taking a snapshot you could elect to take either a &quot;read-only&quot; or &quot;read-write&quot; snapshot. The result was basically the same, with the obvious difference that the snapshot volumes in one were read-only, whilst the others were read-write.<br>In X2 these terms have changed, and we now have a &quot;Protection Copy&quot; (equivalent of a read-only snapshot), and a &quot;Repurpose Copy&quot; (read-write snapshot). 
&#xA0;To some extent this is just a name change, however there have also been related changes to the management concepts for these snapshots in order to both simplify them as well as make them more flexible.</p><h3 id="snapshot-groupings">Snapshot Groupings</h3><p>Other than the new names, the biggest change with X2 is the introduction of a new grouping concept, the &quot;Linked Consistency Group&quot;. &#xA0;The existing groupings of &quot;Consistency Group&quot; and &quot;Snapshot Set&quot; still exist, although the use of the latter has changed a little.</p><p>In X2, these three concepts are used as follows :</p><ul><li>Consistency Group - A manually created group of volumes (or potentially snapshots) that is used as the SOURCE of a snapshot operation. &#xA0;This is basically the same as previously, although a few operations have changed to add more flexibility.</li><li>Linked Consistency Group - A system-created group of snapshots that is the result of a &quot;Repurpose Copy&quot; operation (ie, a read-write snapshot).</li><li>Snapshot Set - A system-created group of snapshots that is the result of a &quot;Protection Copy&quot; operation (ie, a read-only snapshot).</li></ul><p>So basically the difference from X1 is that when creating a read-write (Repurpose) copy, instead of a Snapshot Set being created, you now get a Linked Consistency Group.
&#xA0;This Linked Consistency Group functions basically the same as a standard Consistency Group, with the &quot;Linked&quot; part of the name referring to the fact that there is a relationship between it and the CG from which it was created, which allows refresh operations to occur between them.</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/06/cg1.png" class="kg-image" alt loading="lazy"></figure><p>Creating a read-only (Protection) copy still creates a Snapshot Set, which allows you to manage these separately from copies that you intend to actively use.</p><h3 id="refreshing-snapshots">Refreshing Snapshots</h3><p>Refreshing snapshots remains mostly unchanged, except that most refresh operations will now be between Consistency Groups (rather than Snapshot Sets). &#xA0;It is still possible to refresh from a Snapshot Set (when going from a read-only Protection Copy to a CG), however in the GUI you&apos;ll need to click on &quot;Enable Advanced Mode&quot; to see that as an option.</p><h3 id="restoring-snapshots">Restoring Snapshots</h3><p>Restoring snapshots remains unchanged, however as this is sometimes a misunderstood concept it&apos;s worth restating how &quot;restore&quot; works.<br>Restore is basically a limited version of Refresh.
&#xA0;Where Refresh allows you to refresh from any copy of a snapshot, Restore only allows you to restore from a copy that is both :</p><ul><li>Read Only (ie, a &quot;Protection Copy&quot;), AND</li><li>A direct snapshot from the volume being restored</li></ul><p>Having these additional restrictions allows you to know that what you&apos;re restoring is an immutable (unchanged) copy of the volume that you&apos;re restoring to - not something that has potentially been modified (as a read-write snapshot could have been), or that is actually a copy of a different snapshot that has itself been modified.</p><p>If you need to &quot;restore&quot; from something that doesn&apos;t meet these rules (eg, maybe you accidentally took a read-write Repurpose Copy when you meant to take a read-only Protection Copy), then you can still do it - you just need to instead use a &apos;Refresh&apos; operation, which will give exactly the same result, but without the additional restrictions.</p><h3 id="adding-volumes-to-a-consistency-group">Adding Volumes to a Consistency Group</h3><p>Adding volumes to an existing CG has changed significantly from X1, based on customer request. &#xA0;In X1, you could easily add a volume to a CG, however there was no way to create the equivalent snapshot other than refreshing the snapshot set - which might not be what you wanted to do at that stage.</p><p>In X2, we&apos;ve made adding the snapshot to the Linked CG a manual operation, which gives you the flexibility to do it whenever you want. &#xA0;Thus to add a new volume to a CG the steps are :</p><ol><li>Add the volume itself to the CG. In the GUI this can be done from the CG screen by selecting the CG and then going to Manage -&gt; Add/Remove Volumes</li><li>Select the Linked Consistency Group that you want to add a snapshot of the volume to, and then go to Manage -&gt; Add Paired Volume</li><li>Select the Reference Consistency Group.
ie, the one you added the new volume to above</li><li>Select the new volume that you added to that reference CG</li><li>Select &apos;Create New Pair Volume&apos;. &#xA0;This will automatically create the new snapshot volume for you. &#xA0;If you&apos;d prefer, you could have created the snapshot manually first, and then used &quot;Pair Existing Volume&quot; instead - but in most cases that just adds extra effort for no benefit!</li><li>If required, map the newly created snapshot to the host that the Linked CG is mapped to so that it&apos;s available once you do a refresh.</li></ol><p>Note that it&apos;s not possible to add a new volume to a Snapshot Set (ie, a Protection Copy) - and there&apos;s no need to do so! &#xA0;Snapshot sets are a point-in-time copy, so adding a new volume simply isn&apos;t needed. &#xA0;When the next Protection Copy is taken, it will automatically include all of the volumes in the CG at that time, which will include the new volume.</p><h3 id="cli-changes">CLI Changes</h3><p>I generally don&apos;t recommend doing Snapshot operations from the CLI - it&apos;s easier to use the GUI, or better to use the REST API if you&apos;re automating something - however for completeness there are a number of new snapshot commands in the CLI that replace the old commands. &#xA0;(The old commands still exist, but should not be used with X2 - they are only there for backwards compatibility!). &#xA0;These commands include :</p><ul><li>create-protection-copy</li><li>create-repurpose-copy</li><li>refresh-data</li><li>restore-data</li><li>add-volume-to-consistency-group</li><li>add-copy-volume-to-consistency-groups</li></ul><h3 id="rest-api-changes">REST API Changes</h3><p>The REST API has also been changed fairly significantly - more on that in a future post...</p>]]></content:encoded></item><item><title><![CDATA[United Airlines Bug Bounty Program]]></title><description><![CDATA[<!--kg-card-begin: html--><p>
Around 4 years ago United Airlines launched a <a href="https://www.united.com/ual/en/us/fly/contact/bugbounty.html" target="_blank">&quot;Bug Bounty&quot;</a> program.  Bug bounty programs are becoming more and more common as a way of companies rewarding people for reporting security issues that they discover on their website.  Those rewards generally take the form of a cash payment, or</p>]]></description><link>https://blog.docbert.org/united-airlines-bug-bounty/</link><guid isPermaLink="false">657fa1afcc19c60001cb05b8</guid><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Tue, 28 May 2019 18:41:54 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: html--><p>
Around 4 years ago United Airlines launched a <a href="https://www.united.com/ual/en/us/fly/contact/bugbounty.html" target="_blank">&quot;Bug Bounty&quot;</a> program.  Bug bounty programs are becoming more and more common as a way of companies rewarding people for reporting security issues that they discover on their website.  Those rewards generally take the form of a cash payment, or sometimes just recognition for having discovered and reported the flaw.<!--kg-card-end: html--></p><p>United&apos;s Bug Bounty program is unique in that instead of paying cash for problems detected, they pay awards in United Mileage Plus frequent flyer miles - with payments of between 25,000 and 1,000,000 miles depending on the severity of the bug discovered.</p><h4 id="bug-bounty-submission-1">Bug Bounty Submission 1</h4><p>A little over 2 years after United launched their Bug Bounty program I came across a flaw in their website that I considered worthy of the program, and subsequently submitted it to them. (You can find <a href="https://blog.docbert.org/united-airlines-information-exposure/">details of that issue in a separate blog post</a>). &#xA0;After a few weeks, United denied my submission, claiming that the website was operating correctly.</p><blockquote>&quot;<em>After looking into the issue we have determined this finding is not a valid security concern. The values apparently are required for other service calls. They are transmitting the information securely</em>&quot; - Ben from United&apos;s Bug Bounty Program</blockquote><p>Anyone with any idea of security would know that this response made no sense as the issue clearly is a security flaw, however I decided not to bother following up any further, and vowed to write a blog post instead describing the issue.</p><h4 id="bug-bounty-submission-2">Bug Bounty Submission 2</h4><p>It wasn&apos;t until around 18 months later that I finally got around to writing a post describing the issue I&apos;d discovered and submitted. 
&#xA0;Given that so much time had passed I decided I should first give United another chance to acknowledge that this was a bug, so after confirming that the problem still existed I made another submission to the Bug Bounty program, including both this same issue as well as a second separate but related issue. &#xA0;To my surprise, they admitted that both of my submissions were valid bugs, but that the first issue I&apos;d (re-)submitted was not covered by their Bug Bounty program as it was a &quot;<em>previously-known bug that the development team is currently addressing</em>&quot;.</p><p>Even when I pointed out that I had previously reported this bug over 18 months ago - presumably well before it was a known issue - they refused to budge and claimed that my submission was not valid.</p><h4 id="public-disclosure">Public Disclosure</h4><p>Given that over 18 months had passed since I had submitted the bug and it had not been fixed (their own T&amp;C&apos;s state that &quot;Our desired timeframe to remediate each valid submission is within 90 days ...&quot;) - and because I had already written the blog post describing the issue - I decided to <a href="https://blog.docbert.org/united-airlines-information-exposure/">publicly disclose the information leakage issue</a>, whilst not disclosing the second issue they had accepted as a new/previously unknown issue. &#xA0;To their credit, United did act quickly and fixed the issue within 24 hours of me posting it.</p><p>A few days later I received an email from United&apos;s Bug Bounty team pointing out that as per their T&amp;C&apos;s, publicly disclosing any issues discovered would disqualify me from receiving any miles as payment for discovering the issue - with the clear implication being that I should remove my blog post. 
&#xA0;Apparently the fact that they had already denied my submission and thus I wouldn&apos;t be receiving any miles anyway didn&apos;t occur to them at this stage...</p><h4 id="change-of-face">Change of Face</h4><p>After hearing nothing for a few weeks, out of the blue I received an email stating that as a result of the issue I&apos;d reported now being fixed, as a &quot;valid submitter&quot; I was going to receive a prize of 100,000 United Miles. &#xA0;However before they could pay the prize I would have to take down my blog post describing the (now fixed!) issue.</p><p>It&apos;s hard to know what to make of this. &#xA0;On one hand I can see this as them coming to their senses and recognizing that my original submission was valid and deserved to be awarded - even though they have never actually stated that fact. That being the case, the T&amp;C&apos;s of the bug bounty program around disclosure obviously need to be followed, and thus the blog post should go.</p><p>However it&apos;s just as easy to look on this as a bribe to remove my blog post. Take down the details of the flaw, and we&apos;ll pay you 100,000 miles...</p><hr><h1 id="the-real-problem-with-united-s-bug-bounty">The REAL Problem with United&apos;s Bug Bounty</h1><p>Despite the issues described above, the real problem I have with United&apos;s Bug Bounty isn&apos;t their process, but their payment. &#xA0;The novelty of using &quot;Miles&quot; as payment sounds like a cool idea for an Airline, but ignores one major problem - <strong>the tax consequences</strong>.</p><p>In addition to getting 100,000 miles, United would also be supplying me with a 1099-MISC tax form, valuing the miles at 2 cents per mile, or $2,000 total. &#xA0;As a California resident, that amounts to around a $900 tax liability for me, so in effect my &quot;free&quot; 100,000 miles would actually be costing just under 1 cent each.</p><p>Of course, paying tax on Bug Bounty payments isn&apos;t uncommon. 
&#xA0;The difference is that in most cases the payment itself would be in cash, so the tax liability could be offset against the payment, and the result would always be net positive. &#xA0;With miles, that&apos;s not possible - the IRS won&apos;t allow me to give up 45,000 of my miles as payment of my resulting tax bill - they&apos;ll want cold hard cash!</p><h3 id="what-s-a-mile-worth">What&apos;s a Mile Worth?</h3><p>So the question becomes, what is a &quot;Mile&quot; really worth, and is it worth paying $900 to get 100,000 of them? &#xA0;It&apos;s a difficult question to answer given that miles don&apos;t have a clear cash value. There are a number of ways to use miles, from flights to merchandise to gift cards, so let&apos;s have a look at each of those.</p><h4 id="gift-cards">Gift Cards</h4><p>Starting with the simplest one, and the one nearest to cash value - Gift cards. &#xA0;United has an <a href="https://giftcards.mileageplus.com/?ref=blog.docbert.org">entire website</a> where you can buy gift cards with your miles. The prices vary a little depending on which shop the gift card is for, but in general a $100 gift card will cost you around 15,600 miles, meaning that 100,000 miles would give you around $640 in gift cards.</p><p>Let me say that again. &#xA0;100,000 miles, which would result in a tax bill of over $900, could be turned into gift cards worth around $640, leaving me $260 out of pocket. &#xA0;Clearly not a good option.</p><h4 id="merchandise">Merchandise</h4><p>United also provides the ability to <a href="https://www.mileageplusawards.com/?ref=blog.docbert.org">purchase merchandise with Miles</a> - an option for which they send me a catalog in the mail at least 3-4 times a year.</p><p>Their current &quot;best selling&quot; product is apparently a pair of <a href="https://www.mileageplusawards.com/bose-qc35-series-ii-wireless-noise-cancelling-headphones-2?ref=blog.docbert.org">Bose QC35 headphones</a>, which are available for only 48,100 miles.
These headphones have an RRP from Bose of $350, valuing 100,000 miles at just over $700 - still around $200 less than the tax bill. Other items all offer similar value - such as an Amazon Echo for 13,600 miles ($99.99 on Amazon, or 0.74 cents/mile).</p><h4 id="flights">Flights</h4><p>Calculating the value of miles for flights is difficult, as it can vary dramatically depending on where and when you&apos;re flying. &#xA0;For example, I recently flew from San Francisco to Palm Springs - a ticket that would have cost $264 if I&apos;d paid cash, but instead cost me only 10,000 miles - giving a value of 2.64 cents/mile (or around $2,640 for 100,000 miles).</p><p></p><p>In this case I was lucky that a &quot;cheap&quot; redemption option was available - and even then it was only available because I have status with United. &#xA0;If I&apos;d booked a day later, or if I hadn&apos;t had United status, the price would have been 32,500 miles for that same $264 flight, giving a valuation of only 0.8 cents/mile ($800 for 100,000 miles - back to less than the tax bill!)</p><p>It&apos;s certainly possible to get some great value for flights using Miles, especially when flying Business or First Class on &apos;Saver&apos; awards - although again it&apos;s hard to put a value on that, especially given that such flights can be very difficult to come across. &#xA0;Even picking a relatively expensive route such as San Francisco to Sydney return in Business Class, booked a month or so in advance, will cost around $5000+tax when paying cash, or 400,000 miles - still only giving a value of about 1.25 cents/mile.</p><p>However for me personally, using miles presents a further challenge: as a very frequent flyer, I already have over 2 million miles on United and their partner airlines - enough to last me 5 to 10 (or more) years of flying.
&#xA0;Accepting an additional 100,000 miles is going to give me a tax burden today, for miles that I realistically won&apos;t be using for another 10 or more years!</p><p>And all of this ignores the fact that United reserves the right to clear out your Mileage balance for any one of a number of reasons including &#xA0;simply not earning/using your miles for 18 months - giving those 100,000 miles a value of exactly $0 (but still with a $900 tax bill!)</p><h2 id="fixing-the-problem">Fixing The Problem</h2><p>The simple fact is that the &quot;novelty&quot; value of United&apos;s Bug Bounty payment method turns what is otherwise a reasonably strong program into what for many people will be at best a farce, and potentially even a very expensive experience!</p><p>In the Terms and Conditions for the program, United does call out the potential for tax implications, but even there they get it wrong :</p><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/03/bbtax.png" class="kg-image" alt loading="lazy"></figure><p>The term &quot;at a rate of 2% per mile added to your annual earnings&quot; simply makes no sense - this is likely a typo for &quot;2 cents per mile&quot;, but if it is then it&apos;s a typo that has existed since the start of the program despite others having called them out on it.</p><p>United needs to either determine a way to provide a non-cash prize without tax implications, or at least give the option of a cash alternative. 
&#xA0;Being offered $2,000 cash instead of miles that are apparently worth $2,000 resolves all of the issues described above, as it allows the tax implications to be paid out of the prize itself, rather than being an additional burden on the recipient.</p>]]></content:encoded></item><item><title><![CDATA[United Airlines Mileage Plus/Points.com Information Disclosure]]></title><description><![CDATA[<p>(Update: Less than 24 hours after posting this blog entry United Airlines fixed the problem described, and a password is now required to access the site in question. &#xA0;It&apos;s a pity it took over 18 months and eventually public disclosure to get the issue fixed, but at</p>]]></description><link>https://blog.docbert.org/united-airlines-information-disclosure/</link><guid isPermaLink="false">657fa1afcc19c60001cb05b7</guid><category><![CDATA[united airlines]]></category><category><![CDATA[security]]></category><category><![CDATA[travel]]></category><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Fri, 22 Feb 2019 03:37:18 GMT</pubDate><content:encoded><![CDATA[<p>(Update: Less than 24 hours after posting this blog entry United Airlines fixed the problem described, and a password is now required to access the site in question. &#xA0;It&apos;s a pity it took over 18 months and eventually public disclosure to get the issue fixed, but at least it is now resolved!)</p><p>Ever left a boarding pass in the seat-back pocket on your flight? &#xA0;Or perhaps dropped it whilst on the shuttle to the rental car center? 
&#xA0;How about the baggage label on your checked bag - ever thrown that in the bin without a second thought?</p><p> I&apos;m sure most of us have at some time, but would you do that if you knew that boarding pass or baggage tag allowed someone to access all of your Mileage Plus personal information, such as your home address, phone number, email address and even your Mileage Plus point balance, date you opened your Mileage Plus account, and whether or not you own a United-branded Chase credit card?</p><p>Of course, none of this information is included on the boarding pass, but due to an information disclosure bug on one of United&apos;s partner websites, it&apos;s possible to access all of this information using only two pieces of information that are on every boarding pass - a surname, and a frequent flyer number.</p><p>(There is a second issue that allows you to also obtain this information using only the surname and ticket confirmation number, however United has at least committed to fixing this issue. As the confirmation number is also on the baggage tag it&apos;s possible to use that to get this information as well).</p><p>To their credit, several years ago United Airlines did start &quot;masking&quot; the frequent flyer number on boarding passes (eg, writing it as AB-***123), however every boarding pass still contains a large 2D barcode, which contains all sorts of interesting information about your ticket, including your full Frequent Flyer number (plus your surname, but that is printed in the clear on the boarding pass anyway).</p><p>Armed with the Surname and Mileage Plus number, it&apos;s then off to one of United&apos;s branded websites that is actually run by points.com, <a href="http://buymiles.mileageplus.com/?ref=blog.docbert.org">buymiles.mileageplus.com</a>. &#xA0;As the name implies, this website allows you to buy additional Mileage Plus miles, or transfer them to family members.
&#xA0;In order to &quot;authenticate&quot; to this website, the only details you need to enter are - you guessed it - a surname and a Mileage Plus number. &#xA0;Of course, this wouldn&apos;t be a problem if all you could do was purchase miles (using your own credit card!) for the person you&apos;re logged in as - however as a part of the &quot;login&quot; process the website actually leaks information about the user whose details have been entered.</p><p>The first two pieces of information are blatantly displayed on the post-login page - your current Mileage Plus balance, and a masked email address. &#xA0;Not good, but not too bad either.</p><!--kg-card-begin: html--><div class="image-div" style="width: 850px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/02/mp1.png" class="kg-image"></figure><p></p></div>	<!--kg-card-end: html--><p>The issue isn&apos;t so much with what is shown, as what isn&apos;t. &#xA0;During the authentication process, a number of other details are passed to and from the websites involved VIA THE BROWSER. &#xA0;This means it&apos;s simply a matter of looking at the data being passed to determine multiple other details about the account :</p><!--kg-card-begin: html--><div class="image-div" style="width: 395px;"><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2019/02/mp2.png" class="kg-image"></figure><p></p></div>	<!--kg-card-end: html--><p>So very quickly we&apos;ve gone from having only the user&apos;s surname and Mileage Plus number (or even just their boarding pass!)
to having their home address, phone number, whether or not they have a United credit card, the date their account was created, mileage balance, and their United Mileage Plus status (4=1K, 3=Platinum, etc).</p><p>If nothing else, this information would leave the user open to a very targeted phishing attack - especially given that we also have details of a recent flight they took (based on the boarding pass).</p><h2 id="disclosure-to-united-airlines">Disclosure to United Airlines</h2><p>This issue was reported to United via their &quot;Bug Bounty&quot; program in July of 2017. After spending around a week investigating the issue they responded that :</p><p><em>&quot;After looking into the issue we have determined this finding is not a valid security concern. The values apparently are required for other service calls. They are transmitting the information securely.&quot;</em></p><p>It is now 18 months later and the issue has not been fixed.</p><p>The information disclosed is data that would be considered Personally Identifiable Information (PII) in most jurisdictions, and in particular would fall under GDPR in the EU, so it&apos;s a little hard to understand why United considered this not to be a &quot;valid security concern&quot;.</p>]]></content:encoded></item><item><title><![CDATA[Snapshot Changes in XtremIO X2]]></title><description><![CDATA[I've written previously about how Snapshots (and snapshot refresh) work in XtremIO.  
With XtremIO X2 we changed the way a few of the snapshot-related concepts work in order to simplify things, as well as to add some additional use cases that were not possible previously.]]></description><link>https://blog.docbert.org/snapshot-changes-in-xtremio-x2/</link><guid isPermaLink="false">657fa1afcc19c60001cb05b6</guid><category><![CDATA[xtremio]]></category><category><![CDATA[snapshot]]></category><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Wed, 17 Oct 2018 22:21:37 GMT</pubDate><content:encoded><![CDATA[<p>I&apos;ve written previously about how <a href="https://blog.docbert.org/xtremio-snapshot-refresh/">Snapshots (and snapshot refresh) work in XtremIO</a>. &#xA0;With XtremIO X2 we changed the way a few of the snapshot-related concepts work in order to simplify things, as well as to add some additional use cases that were not possible previously.</p><h4 id="xtremio-x1-snapshots">XtremIO X1 Snapshots</h4><p>First, some history. As per the blog post linked above, taking a snapshot on X1 always resulted in a &quot;Snapshot Set&quot; being created. &#xA0;This Snapshot Set could then be used to create a new snapshot, or as a part of a snapshot refresh. &#xA0;A &quot;Consistency Group&quot; in X1 was only used as the source of a snapshot or refresh. &#xA0;Refresh was allowed between Consistency Groups and Snapshot Sets, or between Snapshot Sets and other Snapshot Sets (as long as they were both taken from the same CG tree).</p><p>When taking a snapshot with X1, you had the choice of a read-only or a read-write snapshot. &#xA0;The two acted basically the same, except that a read-only snapshot couldn&apos;t be refreshed (and obviously, was read-only when presented to a host).</p><h4 id="xtremio-x2-snapshots">XtremIO X2 Snapshots</h4><p>In X2, the general terms remain the same.
&#xA0;We still have &quot;Consistency Groups&quot; and we still have &quot;Snapshot Sets&quot;, but their use is a little different.</p><p>A Consistency Group is still a group of one or more volumes. &#xA0;It&apos;s now used for more than just snapshots (such as Native Replication and QoS), but fundamentally it&apos;s similar to how it worked in X1.</p><p>Snapshot Sets still exist, but their use has changed a little, and they are now generally only used for &quot;Protection&quot; copies.</p><p>The most obvious change when you come to take a snapshot of a Consistency Group is that the term &quot;snapshot&quot; has been removed from the GUI, and instead you have the option of a &quot;Protection Copy&quot; or a &quot;Repurpose Copy&quot;, which correspond to a read-only snapshot and a read-write snapshot respectively. &#xA0;However there are more differences between the two than just that...</p><h5 id="protection-copy">Protection Copy</h5><p>As the name implies, a protection copy creates a copy of your volumes that can be used for &quot;protection&quot; - a read-only &quot;backup&quot;-style copy of your volumes.
&#xA0;As in previous versions, taking a protection copy of a CG will result in a Snapshot Set being created for the resulting volumes, and you will be able to specify the name of that Snapshot Set during the process.</p><!--kg-card-begin: html--><div class="image-div" style="width: 504px; margin: 0 0 20px; margin: auto;"><!--kg-card-end: html--><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2018/10/protection.png" class="kg-image" alt loading="lazy"></figure><!--kg-card-begin: html--></div><!--kg-card-end: html--><p>As we&apos;re taking a Protection Copy, the resulting volumes will be read-only (displayed as &quot;Read-Access&quot; in the GUI).</p><!--kg-card-begin: html--><div class="image-div" style="width: 1170px; margin: 0 0 20px; margin: auto;"><!--kg-card-end: html--><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2018/10/protection2.png" class="kg-image" alt loading="lazy"></figure><!--kg-card-begin: html--></div><!--kg-card-end: html--><p>As this is a read-only copy, there is no option to refresh the snapshot - refreshing really only makes sense when the volumes have been mapped to a host and you want to update the data on them, which generally won&apos;t be how a protection copy is used. &#xA0;If you want to update a protection copy, you can just take a new protection copy of the volumes.</p><h5 id="repurpose-copy">Repurpose Copy</h5><p>A Repurpose Copy is a copy of data you are going to &quot;repurpose&quot; for a new use, such as a development copy.
As a result, it&apos;s a full read-write copy of the volumes, and is exactly equivalent (at the volume/snapshot level) to taking a read-write snapshot in previous versions.</p><p>The big difference between Repurpose Copy and a read-write snapshot is that instead of the resulting snapshots being grouped into a Snapshot Set as in previous versions, they are instead put into a new Consistency Group, or what is sometimes referred to as a &quot;Linked Consistency Group&quot; on the grounds that it is &quot;linked&quot; to the original CG that it was taken from.</p><!--kg-card-begin: html--><div class="image-div" style="width: 498px; margin: 0 0 20px; margin: auto;"><!--kg-card-end: html--><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2018/10/repurpose.png" class="kg-image" alt loading="lazy"></figure><!--kg-card-begin: html--></div><!--kg-card-end: html--><p>After creating the Repurpose Copy we will have 2 Consistency groups - the one we took the snapshot of (<em>OracleProd</em>), and the new one (<em>OracleDev1</em>) containing the newly created read-write snapshots.</p><!--kg-card-begin: html--><div class="image-div" style="width: 1171px; margin: 0 0 20px; margin: auto;"><!--kg-card-end: html--><figure class="kg-card kg-image-card"><img src="https://blog.docbert.org/content/images/2018/10/repurpose2.png" class="kg-image" alt loading="lazy"></figure><!--kg-card-begin: html--></div><!--kg-card-end: html--><h5 id="refreshing-a-snapshot">Refreshing a Snapshot</h5><p>Repurpose copies can be refreshed just like read-write snapshot sets could be previously - just select the Consistency Group you want to refresh, and select &quot;Refresh Selected&quot; from the Repurpose menu.</p><p>When doing a refresh, you will by default only be given the option to refresh FROM another Consistency Group - either the original source CG, or another repurpose copy. 
&#xA0;If instead you want to refresh from a read-only Protection Copy it&apos;s just a matter of selecting the &quot;Enable advance mode&quot; checkbox during the wizard, at which point you&apos;ll also be given the option of refreshing from a Snapshot Set.</p><p>In X2 when refreshing a Consistency Group, the name of the CG is NOT changed as a part of the process. &#xA0;This is different to X1, where the name of the snapshot set was changed with each refresh, and additional steps - such as renaming the resulting snapshot set - were often required to keep the name the same. &#xA0;This makes programmatically refreshing a snapshot much easier than before, as you will always be refreshing the same objects, without the need to use tags or rename the resulting objects as was required with X1.</p><h4 id="cli">CLI</h4><p>New CLI commands have been added to match the above operations :</p><p><code>create-protection-copy</code> - Creates a new read-only Protection Copy, along with a new Snapshot Set</p><p><code>create-repurpose-copy</code> - Creates a new repurpose copy, along with a new (Linked) Consistency Group</p><p><code>refresh-data</code> - Refreshes a read-write snapshot (Consistency Group) from either another consistency group (original volume or another repurpose copy) or from a snapshot set (protection copy)</p><h4 id="rest-api">REST API</h4><p>There have been a number of changes to the REST API, so I&apos;ll cover those in another post (coming soon...)</p><h4 id="backwards-compatibility">Backwards Compatibility</h4><p>As a part of these changes, we wanted to maintain backwards compatibility for any CLI or REST API scripts that people were using on X1 systems.</p><p>In order to do this, X2 systems will act in a &apos;backwards compatible&apos; mode if the older X1 CLI or REST API commands are used to create or manipulate snapshots.
&#xA0;For example, the X1 &apos;create-snapshots&apos; CLI command continues to exist and acts exactly as it did in X1 - a new snapshot set will be created even when creating a read-write snapshot (and NOT a new consistency group as would be created when using the new create-repurpose-copy command). &#xA0;The same is true for the &apos;create-snapshot-and-reassign&apos; command that was previously used for refreshing snapshots, which continues to work exactly as it did previously.</p><p>In order to get this backwards compatibility, it&apos;s important to use ONLY the old-style commands with these copies, and not jump between the old and new mechanisms. &#xA0;Where this can be a little difficult is when creating the initial snapshot. &#xA0;Many people will use the GUI to create a snapshot, but then use the CLI or REST API to refresh it (normally as a part of an automated workflow). This will NOT work with X2, as the GUI will follow the new mechanism for creating the snapshot (resulting in a CG being created rather than a snapshot set).
&#xA0;The workaround is simply to create the original snapshot using the create-snapshots CLI command rather than using the GUI, in order to get the old-style functionality.</p><!--kg-card-begin: html--><div class="image-div" style="width: 788px; margin: 0 0 20px; margin: auto;"><!--kg-card-end: html--><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.docbert.org/content/images/2018/10/bc1.png" class="kg-image" alt loading="lazy"><figcaption>Creating a read-write snapshot using the X1-style commands</figcaption></figure><!--kg-card-begin: html--></div><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Querying Performance Data from the XtremIO REST API]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>(If you&apos;re not already familiar with using the XtremIO REST API, then I&apos;d suggest reading <a href="https://blog.docbert.org/using-the-xtremio-rest-api-part-1/">my series on using it</a> first)</p>
<p>XtremIO has a very powerful API for accessing performance data from the XMS, but it can be a little difficult to get your head around.</p>]]></description><link>https://blog.docbert.org/querying-performance-data-from-the-xtremio-rest-api/</link><guid isPermaLink="false">657fa1afcc19c60001cb05b5</guid><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Tue, 05 Jun 2018 19:10:30 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>(If you&apos;re not already familiar with using the XtremIO REST API, then I&apos;d suggest reading <a href="https://blog.docbert.org/using-the-xtremio-rest-api-part-1/">my series on using it</a> first)</p>
<p>XtremIO has a very powerful API for accessing performance data from the XMS, but it can be a little difficult to get your head around.  The <a href="https://support.emc.com/search/?text=restful+api&amp;product_id=31109&amp;resource=DOC_LIB&amp;ref=blog.docbert.org">REST API Guide available on support.emc.com</a> does a good job of documenting the options, but not really how to use them.</p>
<h3 id="understandingxtremioperformancedata">Understanding XtremIO Performance Data</h3>
<p>Before accessing the data from the API, it helps to understand a little about how performance data is collected and stored by XtremIO.</p>
<p>Every 5 seconds the XMS attempts to collect hundreds of performance metrics from the array - not just at the array level, but also at the level of things like Targets, Initiators, and Volumes.</p>
<p>This is obviously a lot of data, so over time the data is consolidated to less frequent intervals. The initial &quot;raw data&quot; is kept for 3 days. One-minute data is kept for 7 days, ten-minute data for 30 days, and so on. (See the XtremIO Users Guide for full details)</p>
<p>What is interesting is that this consolidation is done soon after the data is collected, not once the data reaches the expiry time for the previous timeframe (eg, 3 days for the 5 second raw data). Thus for (say) data collected yesterday the system will have data available at each of &quot;raw data&quot; (5 seconds), &quot;1 minute&quot;, &quot;10 minute&quot;, &quot;1 hour&quot; and &quot;1 day&quot; granularity. After 3 days, the raw data will be deleted, but the other granularities will remain. After 7 days, the 1 minute data will be deleted, and so on.</p>
<p>As we&apos;ll see shortly, when querying data we can specify which of the granularities we want to use, or we can let the system decide for us.</p>
<p>When the consolidation occurs, the system will maintain 3 data points for each time period - a minimum, a maximum, and an average. However for the initial raw data it will have only a single value (which is in itself generally an average over the 5 second collection interval).</p>
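To make the consolidation concrete, here's a small sketch (with made-up sample values) of how one period's raw samples collapse into the three consolidated data points:

```python
# Hypothetical raw 5-second cpu_usage samples covering one consolidation period
samples = [2.1, 2.4, 1.9, 2.8, 2.0, 2.3]

# Consolidation keeps three values per period: min, max and average.
# The double-underscore prefix mirrors the field names the XMS returns.
consolidated = {
    "min__cpu_usage": min(samples),
    "max__cpu_usage": max(samples),
    "avg__cpu_usage": sum(samples) / len(samples),
}
print(consolidated)
```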
<h3 id="queryingthedata">Querying the Data</h3>
<p>All performance queries are made using a single URI - /api/json/v2/types/performance</p>
<p>At a minimum you need to pass an &quot;entity&quot; parameter to specify which type of object you want performance data for. The list of possible entities is contained in the REST API guide, but you can also get a list by passing in an invalid entity  (eg, .../performance?entity=A) which will return a response containing the full list :</p>
<pre><code>{
    &quot;message&quot;: &quot;Command Syntax Error: entity property must have one of the following values: [SnapshotGroup, Initiator, Target, VolumeTag, XEnv, DataProtectionGroup, Volume, Cluster, Tag, InitiatorGroup, InitiatorGroupTag, SSD, TargetGroup, Xms]&quot;, 
    &quot;error_code&quot;: 400
}
</code></pre>
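As a side note, that error message can itself be parsed to discover the valid entities programmatically. A minimal sketch, using the response body above:

```python
import json
import re

# The 400 response body returned when an invalid entity is requested
error_body = json.dumps({
    "message": "Command Syntax Error: entity property must have one of the "
               "following values: [SnapshotGroup, Initiator, Target, VolumeTag, "
               "XEnv, DataProtectionGroup, Volume, Cluster, Tag, InitiatorGroup, "
               "InitiatorGroupTag, SSD, TargetGroup, Xms]",
    "error_code": 400,
})

def valid_entities(body):
    """Pull the bracketed entity list out of the error message."""
    match = re.search(r"\[([^\]]+)\]", json.loads(body)["message"])
    return [name.strip() for name in match.group(1).split(",")] if match else []

print(valid_entities(error_body))
```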
<p>Given that many of these entities return a LOT of data by default, let&apos;s start with one of the simpler ones - &quot;XEnv&quot;, which will return performance data for the &quot;X Environments&quot;, otherwise known as the CPUs!</p>
<p>Sending a request for /api/json/v2/types/performance?entity=XEnv  (note that the entity name IS CASE SENSITIVE, so in this case the X and E need to be upper case!) returns a JSON response which basically consists of 2 sections.  The first is a number of &quot;counter&quot; entries that look like :</p>
<pre><code>&quot;counters&quot;: [
    [
        1513555200000, 
        &quot;529d231911a84b0a879a557c250abcd4&quot;, 
        &quot;X1-SC1-E1&quot;, 
        1, 
        2.2815487587808998
    ], 
    [
        1513555200000, 
        &quot;33ba4bf1e2ea4175905098bdc9a825c2&quot;, 
        &quot;X1-SC1-E2&quot;, 
        2, 
        2.1723940081082902
    ], 
</code></pre>
<p>The second is a &quot;members&quot; entry :</p>
<pre><code>&quot;members&quot;: [
    &quot;timestamp&quot;, 
    &quot;guid&quot;, 
    &quot;name&quot;, 
    &quot;index&quot;, 
    &quot;avg__cpu_usage&quot;
], 
</code></pre>
<p>We need to use these two together - the multiple &quot;counters&quot; entries contain the actual data, whilst the (single) &quot;members&quot; entry provides the order of the fields in the counters.</p>
<p>Thus for the example above we&apos;ve got a data point with a timestamp of 1513555200000, for the XEnv with the name &quot;X1-SC1-E1&quot; (X-brick 1, Storage Controller 1, Environment/CPU 1), for which the performance counter &quot;avg__cpu_usage&quot; has a value of 2.2815487587808998</p>
<p>We&apos;ve also got a second data point with the same timestamp, but for the XEnv &quot;X1-SC1-E2&quot; (CPU2), with an &quot;avg__cpu_usage&quot; value of 2.1723940081082902.  There were of course hundreds of other data points returned - in part because we didn&apos;t specify a time frame for the query!</p>
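In Python, pairing the two is a one-liner - zip each counters row against the members list to get self-describing dictionaries (the rows below are the two data points from the example above):

```python
# Field order comes from "members"; each "counters" row is one data point
members = ["timestamp", "guid", "name", "index", "avg__cpu_usage"]
counters = [
    [1513555200000, "529d231911a84b0a879a557c250abcd4", "X1-SC1-E1", 1, 2.2815487587808998],
    [1513555200000, "33ba4bf1e2ea4175905098bdc9a825c2", "X1-SC1-E2", 2, 2.1723940081082902],
]

# dict(zip(...)) pairs each value with its field name
points = [dict(zip(members, row)) for row in counters]
print(points[0]["name"], points[0]["avg__cpu_usage"])
```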
<p>The name of the data field &quot;avg__cpu_usage&quot; actually tells us that we were NOT seeing the raw data in this result, but instead one of the consolidated values, and specifically the &quot;average&quot; value (not minimum or maximum for that consolidated period).  We&apos;ll see below how to get the raw data or the min/max values.  The double-underscore in the name is used to indicate the &quot;avg&quot; comes from the XMS&apos;s consolidation of the data, and not an average value from the array itself.</p>
<h3 id="limitingtheresults">Limiting the Results</h3>
<p>By default, the number of data points returned can be very large - especially if you&apos;re not specifying a time range (which we&apos;ll cover in a moment).</p>
<p>You can filter the returned results in 2 separate ways - either by the entities (eg, one or more specific volumes), or by the properties returned (eg, bandwidth, iops, latency, etc).</p>
<p>To filter by entity you use the &quot;entity-name=XXX&quot; option on the URL.  Multiple of these can be used in the same query to return data for multiple entities.</p>
<p>eg, to get data just for the volume &quot;MyVol1&quot; you could use :</p>
<pre><code>/api/json/v2/types/performance?entity=Volume&amp;entity-name=MyVol1
</code></pre>
<p>To get data for three separate volumes, simply list them all :</p>
<pre><code>/api/json/v2/types/performance?entity=Volume&amp;entity-name=MyVol1&amp;entity-name=MyVol2&amp;entity-name=MyVol3
</code></pre>
<p>Even with only a few entities the results can still be very large. For example, &quot;Volume&quot; entities return around 24 different performance metrics for each time point, including bandwidth (avg__bw), read bandwidth (avg__rd_bw), write bandwidth (avg__wr_bw), IOPS, read IOPS, write IOPS, etc</p>
<p>Most likely you only need a few of those, so you can specify which properties you want returned using the &quot;prop=XXX&quot; option.  The best way to use this is to do a query without it to see the full list of properties, and then specify just the ones you want.  As before, you can pass multiple &quot;prop&quot; options to get multiple properties returned.</p>
<p>eg, to get the average bandwidth and IOPS for both MyVol1 and MyVol2 :</p>
<pre><code>/api/json/v2/types/performance?entity=Volume&amp;entity-name=MyVol1&amp;entity-name=MyVol2&amp;prop=avg__bw&amp;prop=avg__iops
</code></pre>
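Rather than concatenating these by hand, the repeated keys can be generated with Python's standard library - a sketch using urlencode with doseq=True, which expands list values into repeated parameters:

```python
from urllib.parse import urlencode

# doseq=True turns each list value into repeated key=value pairs
params = {
    "entity": "Volume",
    "entity-name": ["MyVol1", "MyVol2"],
    "prop": ["avg__bw", "avg__iops"],
}
query = "/api/json/v2/types/performance?" + urlencode(params, doseq=True)
print(query)
# → /api/json/v2/types/performance?entity=Volume&entity-name=MyVol1&entity-name=MyVol2&prop=avg__bw&prop=avg__iops
```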
<p>Note that there are a few properties returned regardless of whether you ask for them or not, such as timestamp, name, index and guid.</p>
<h3 id="filteringbytime">Filtering by Time</h3>
<p>As mentioned above, the XMS consolidates data over time, with data being available for up to 2 years (although at a low granularity - one point per day!), or as frequently as every 5 seconds.</p>
<p>When it comes to selecting which data we&apos;re interested in, there are a few different options that play a part.</p>
<p>The first is &quot;time-frame&quot;, which can be one of four specific ranges &quot;last_hour&quot;, &quot;last_day&quot;, &quot;last_week&quot; or &quot;last_year&quot;, or alternatively &quot;real_time&quot; or &quot;custom_time&quot;.</p>
<p>When using custom_time, either or both of &quot;from-time&quot; and &quot;to-time&quot; can be specified to control the range - if either (or both) parameters are skipped then the time of the oldest (from-time) or newest (to-time) data available will be used.</p>
<p>from/to-time are specified in GMT time, in the format &quot;YYYY-MM-DD hh:mm:ss&quot;.  Note that&apos;s a space in the middle, which is not a valid character to have in a URL.  If the software you&apos;re using to make the request doesn&apos;t automatically do so, you should replace it with &quot;%20&quot; which is the encoded equivalent of a space (Space is ASCII code 32 decimal, which is 20 hex).</p>
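In Python the encoding can be delegated to the standard library rather than done by hand - for example, quote() with ':' marked as safe converts only the space:

```python
from urllib.parse import quote

from_time = "2018-05-01 00:00:00"
# Space (ASCII 32) becomes %20; ':' is explicitly left unencoded
encoded = quote(from_time, safe=":")
print(encoded)  # 2018-05-01%2000:00:00
```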
<p>In addition to &quot;time-frame&quot; you can also specify which level of data consolidation, or &quot;granularity&quot; you want the data from.  ie, whether you want the &quot;raw&quot; data (5 second), &quot;one_minute&quot;, &quot;ten_minute&quot;, &quot;one_hour&quot; or &quot;one_day&quot; values.  Of course, not all data is available for all time ranges, so if you ask for a &quot;time-frame&quot; of &quot;last_year&quot; with a &quot;granularity&quot; of &quot;raw&quot;, you&apos;ll actually end up getting only the past 3 days&apos; worth of data - as that is all that is available for the &quot;raw&quot; setting.</p>
<p>It&apos;s also possible to pass a &quot;granularity&quot; of &quot;auto&quot;, in which case the system will automatically determine the granularity based on the time range you&apos;ve specified.</p>
<h3 id="aggregationtype">Aggregation Type</h3>
<p>The final parameter you may want to use is &quot;aggregation-type&quot;.  By default, when querying data that has been consolidated (ie, everything except &quot;raw&quot; data) you will be given the average over the time period, and the name of any fields that have been aggregated will be prefaced with &quot;avg__&quot;  (eg, avg__iops).  You can instead explicitly specify that you want the &quot;min&quot;, &quot;max&quot; or &quot;avg&quot; values - and as with the other options you can specify multiple if needed.</p>
<pre><code>/api/json/v2/types/performance?entity=Volume&amp;entity-name=MyVol1&amp;aggregation-type=min&amp;aggregation-type=max&amp;aggregation-type=avg
</code></pre>
<p>This will give you results that include all of min__bw, max__bw and avg__bw (and of course the same for every other property returned!)</p>
<p>This also leads to one of the quirks of the performance API.  If you end up querying the &quot;raw&quot; data - either because you explicitly specify it (with granularity), or because you specify a time-frame where the system picks that data automatically (eg, last_hour), then the data isn&apos;t aggregated.  This means that there are no min/max/avg values, but more importantly it means that the property names returned do NOT include the avg__ at the start like they do when querying other time frames.</p>
<p>eg, if you query XEnv performance data for the past hour, the response will have the following entries :</p>
<pre><code>&quot;members&quot;: [
    &quot;timestamp&quot;, 
    &quot;guid&quot;, 
    &quot;name&quot;, 
    &quot;index&quot;, 
    &quot;cpu_usage&quot;
],
</code></pre>
<p>But if you instead query for the data for the past day (with no other changes), you will receive :</p>
<pre><code>&quot;members&quot;: [
    &quot;timestamp&quot;, 
    &quot;guid&quot;, 
    &quot;name&quot;, 
    &quot;index&quot;, 
    &quot;avg__cpu_usage&quot;
],
</code></pre>
<p>There are 2 potential ways to handle this - either explicitly specify the granularity of the data you want to receive, or programmatically handle the avg__ prefix on the results if it exists.</p>
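A sketch of the second approach - a small (hypothetical) helper that strips any aggregation prefix so raw and consolidated results can be processed with the same field names:

```python
def normalize(member):
    """Strip the XMS aggregation prefix (avg__/min__/max__), if present."""
    for prefix in ("avg__", "min__", "max__"):
        if member.startswith(prefix):
            return member[len(prefix):]
    return member

# Raw and consolidated field names now map to the same key
print(normalize("cpu_usage"), normalize("avg__cpu_usage"), normalize("max__bw"))
```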
<h3 id="nullvalues">&quot;null&quot; values</h3>
<p>There is one other thing to be aware of when using &quot;raw&quot; data, which is that although the system attempts to collect data every 5 seconds, for various reasons it occasionally fails.  eg, there might be a network issue between the XMS and the array that causes the data collection to fail.</p>
<p>In this case, when querying the raw data you will still receive entries, however the value will be &quot;null&quot;.  For example, for an XEnv query where no data is available you&apos;ll see :</p>
<pre><code>   [
        1527861190000, 
        &quot;529d231911a84b0a879a557c250abcd4&quot;, 
        &quot;X1-SC1-E1&quot;, 
        1, 
        null
    ], 
</code></pre>
<p>There is a second occasion when &quot;null&quot; will be returned.  If a volume or a snapshot has never been mapped to a host (ie, it doesn&apos;t have an NAA assigned), then the array doesn&apos;t generate performance data for it - as it can&apos;t have any!  In this case, both the raw data and the consolidated values will be &quot;null&quot;.</p>
<pre><code>[
        1527938480000, 
        &quot;a3ea0523b56a49c4a200eb04070bb210&quot;, 
        &quot;NewVol1&quot;, 
        4, 
        null, 
        null, 
        null, 
        null, 
        null,
</code></pre>
<p>These entries should generally be ignored as invalid (or at best, uninteresting) data points.</p>
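Filtering these out is straightforward once the rows have been parsed - a sketch, assuming the four identifying fields (timestamp, guid, name, index) lead each row as in the examples above:

```python
# A row is interesting only if at least one metric beyond the four
# identifying fields (timestamp, guid, name, index) is non-null
def has_data(row):
    return any(value is not None for value in row[4:])

rows = [
    [1527861190000, "529d231911a84b0a879a557c250abcd4", "X1-SC1-E1", 1, None],
    [1527861195000, "529d231911a84b0a879a557c250abcd4", "X1-SC1-E1", 1, 2.17],
]
valid = [row for row in rows if has_data(row)]
print(len(valid))  # 1
```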
<h3 id="puttingitalltogether">Putting it all together</h3>
<p>Let&apos;s say I wanted to get the average IOPS and Bandwidth for my two Oracle volumes for the first week of May 2018, with one data point per hour.  So we&apos;ve got :<br>
<strong>entity=Volume</strong>  because I&apos;m looking for data on a volume<br>
<strong>entity-name=Oracle1&amp;entity-name=Oracle2</strong>   as those are my 2 Oracle volume names<br>
<strong>prop=avg__bw&amp;prop=avg__iops</strong>  because these are the only two properties I&apos;m interested in<br>
<strong>time-frame=custom_time&amp;from-time=2018-05-01 00:00:00&amp;to-time=2018-05-07 23:59:59</strong>  (Don&apos;t forget to replace the spaces with %20 if needed!)<br>
<strong>granularity=one_hour</strong></p>
<p>Giving me a full query of</p>
<pre><code>/api/json/v2/types/performance?entity=Volume&amp;entity-name=Oracle1&amp;entity-name=Oracle2&amp;prop=avg__bw&amp;prop=avg__iops&amp;time-frame=custom_time&amp;from-time=2018-05-01%2000:00:00&amp;to-time=2018-05-07%2023:59:59&amp;granularity=one_hour
</code></pre>
<p>If I wanted to get both the maximum AND average values, then I&apos;d need to add :<br>
<strong>prop=max__bw&amp;prop=max__iops</strong>  (In addition to the existing prop entries)<br>
<strong>aggregation-type=max&amp;aggregation-type=avg</strong> in order to get BOTH maximum and average</p>
<p>Giving :</p>
<pre><code>/api/json/v2/types/performance?entity=Volume&amp;entity-name=Oracle1&amp;entity-name=Oracle2&amp;prop=avg__bw&amp;prop=avg__iops&amp;prop=max__bw&amp;prop=max__iops&amp;time-frame=custom_time&amp;from-time=2018-05-01%2000:00:00&amp;to-time=2018-05-07%2023:59:59&amp;granularity=one_hour&amp;aggregation-type=max&amp;aggregation-type=avg
</code></pre>
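If you're building queries like this regularly, the assembly is easy to wrap up. A sketch of a hypothetical helper (not part of any official SDK) that produces an equivalent query from named arguments:

```python
from urllib.parse import quote, urlencode

def perf_query(entity, names=(), props=(), aggregation=(), extra=None):
    """Build a performance URI; repeated options become repeated query keys."""
    params = [("entity", entity)]
    params += [("entity-name", n) for n in names]
    params += [("prop", p) for p in props]
    params += [("aggregation-type", a) for a in aggregation]
    params += list((extra or {}).items())
    # quote_via=quote encodes spaces as %20 rather than '+'
    return "/api/json/v2/types/performance?" + urlencode(params, quote_via=quote)

query = perf_query(
    "Volume",
    names=["Oracle1", "Oracle2"],
    props=["avg__bw", "avg__iops", "max__bw", "max__iops"],
    aggregation=["max", "avg"],
    extra={"time-frame": "custom_time",
           "from-time": "2018-05-01 00:00:00",
           "to-time": "2018-05-07 23:59:59",
           "granularity": "one_hour"},
)
print(query)
```

Note that urlencode also percent-encodes the colons (as %3A), which decodes back to exactly the same time string.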
<h3 id="finalthoughts">Final Thoughts</h3>
<p>Whilst this may seem complex, once you get your head around the options it&apos;s actually fairly simple.  As with all REST API queries, the best option is to simply play around with some queries and see what you get back - either with a tool like Postman or HTTPRequester in Firefox, or even using something like cURL.  Try different options, and see what you get back. Use prop and entity-name to limit the number of entries.</p>
<p>There&apos;s also another option I haven&apos;t mentioned which is &quot;limit=X&quot; which can be used to limit the number of results returned.  Using this in a final query probably isn&apos;t a good idea, but it can be useful when learning and testing to make sure you don&apos;t get too much data back from a query.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[XtremIO X1 and VMware -  APD v’s PDL]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In general, when a Storage Controller is shutting down for any reason, XtremIO X1 will send a &quot;Logical Unit Not Supported&quot; SCSI response code (0x5/0x25) in response to any requests received by that storage controller from a connected host.  This could be just a single SC shutdown</p>]]></description><link>https://blog.docbert.org/xtremio-x1-and-vmware-esxi-apd-vs-pdl/</link><guid isPermaLink="false">657fa1afcc19c60001cb05b4</guid><category><![CDATA[xtremio]]></category><category><![CDATA[vmware]]></category><category><![CDATA[APD]]></category><category><![CDATA[PDL]]></category><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Tue, 24 Apr 2018 21:07:16 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In general, when a Storage Controller is shutting down for any reason, XtremIO X1 will send a &quot;Logical Unit Not Supported&quot; SCSI response code (0x5/0x25) in response to any requests received by that storage controller from a connected host.  This could be just a single SC shutdown (eg, during an upgrade, where SC&apos;s are rebooted one at a time), or during an entire array shutdown (eg, where power has been lost and all Storage Controllers are running on battery).</p>
<p>For most operating systems, this gives the behavior that we want - the Multipathing software on the host will mark that one path as down, and continue to use the other paths.  For a complete array shutdown (eg, power outage), the host will end up marking all of the paths down, but will continue to try using them and will recover the paths once the array is available again.</p>
<p>However VMware ESXi attempts to be a little more intelligent when it loses connectivity to a LUN in situations like this, and can actually put the device into one of two different states when it loses all paths to that device - &quot;All Paths Down&quot; (APD), or &quot;Permanent Device Loss&quot; (PDL). VMware have a <a href="https://kb.vmware.com/s/article/2004684?ref=blog.docbert.org">KB article</a> and a <a href="https://blogs.vmware.com/vsphere/2011/08/all-path-down-apd-handling-in-50.html?ref=blog.docbert.org">blog post</a> that cover some of the differences between these two states, but simplistically the difference is that with APD you expect the LUN to return at a later stage, whilst with PDL the expectation is that it will not return (as the name implies!)</p>
<p>In general, the only way to recover from a PDL state is to reboot the VMware host, which can obviously be a fairly intrusive action.  With APD, the devices will generally recover once at least one path to the device is recovered.</p>
<h4 id="apdoptioninxtremio">APD option in XtremIO</h4>
<p>Given the potential impact and extended recovery from LUNs going into PDL state, in XtremIO XIOS 4.0.10 we added the ability to change the SCSI codes that the array sends to VMware hosts during a Storage Controller shutdown.  Instead of sending a SCSI &quot;Logical Unit Not Supported&quot; response like we do for most OS&apos;es, we instead send a SCSI &quot;Transport Disrupted&quot; response, which has the same immediate impact on the VMware side (path is taken offline), however if all paths fail, VMware will put the devices into &quot;All Paths Down&quot; state, rather than PDL.</p>
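<p>To make the distinction concrete, here&apos;s a small sketch of the classification ESXi performs on the sense data it receives. The PDL-triggering codes are those listed in VMware&apos;s KB article 2004684; the function itself is purely illustrative and is not ESXi&apos;s actual implementation:</p>

```shell
#!/bin/bash
# Sketch: classify a SCSI sense triple (key/ASC/ASCQ) the way ESXi does when
# deciding whether a failed device is in PDL. The code list comes from VMware
# KB 2004684; everything else here is illustrative only.
is_pdl_sense() {
  case "$1/$2/$3" in
    # 0x5/0x25 "Logical Unit Not Supported" is what XtremIO X1 sends by default
    0x5/0x25/0x0|0x4/0x4c/0x0|0x4/0x3e/0x3|0x4/0x3e/0x1) return 0 ;;  # PDL
    *) return 1 ;;                                       # path failure only
  esac
}

if is_pdl_sense 0x5 0x25 0x0; then
  echo "PDL"
else
  echo "path failure only"
fi
```

With the &quot;Transport Disrupted&quot; response sent in APD mode, no PDL sense code is ever seen, so the worst case when all paths fail is APD.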
<p>For backwards compatibility reasons we didn&apos;t change the default behavior, but instead added a new configuration setting to allow admins to select if they wanted the old (PDL) or new (APD) behavior.  The current setting can be viewed/changed using the &quot;show-clusters-parameters&quot; and &quot;modify-clusters-parameters&quot; commands from the XMS CLI :</p>
<div class="image-div" style="width: 1129px; margin: 0 0 20px">
<p><img src="https://blog.docbert.org/content/images/2018/04/scp.png" alt="scp" loading="lazy"></p>
</div>
<p>As these settings are for VMware ESXi only, they only apply to initiators that have been configured with an Operating System of &quot;ESX&quot;.  Initiators configured with any other OS, including the default of &quot;Other&quot;, will still follow the old &quot;PDL&quot; behavior, regardless of what the cluster&apos;s ESX Device Connectivity Mode is set to.</p>
<div class="image-div" style="width: 683px; margin: 0 0 20px">
<p><img src="https://blog.docbert.org/content/images/2018/04/init.png" alt="init" loading="lazy"></p>
</div>
<h4 id="defaultbehavior">Default Behavior</h4>
<p>As mentioned above, when we added this setting we did not change the default behavior.  All existing and new systems kept the &quot;PDL&quot; behavior unless it was explicitly changed.</p>
<p>In XIOS version 4.0.25-22 (and later) we changed the default behavior, but only for newly installed clusters.  Existing clusters still retain the PDL behavior unless they are changed, however any clusters installed with 4.0.25 or later will default to the APD behavior.</p>
<h4 id="vmwaremultipathingbugs">VMware Multipathing Bugs</h4>
<p>These changes were originally only really relevant for situations where the entire XtremIO cluster was shutdown, such as a planned outage or a power outage.  However over the past few years VMware ESXi Native Multipathing has had two bugs that have made these changes relevant for other situations as well.  The first of these was in ESXi 6.0u2, whilst the second was in <a href="https://kb.vmware.com/s/article/53519?ref=blog.docbert.org">specific patches across multiple versions of ESXi</a>.</p>
<p>In both of these situations, ESXi acts incorrectly when it receives a &quot;Logical Unit Not Supported&quot; response, with the end result that it can incorrectly fail all paths to the storage, resulting in a host-side Data Unavailability situation - even though the array is fully functional.  These issues can be triggered by many different arrays (generally Active/Active multipath arrays), including XtremIO X1.</p>
<p>Although these are bugs on the ESXi side, and they can be avoided by applying VMware patches (at the time of writing, the patches for the second issue are still pending), changing the XtremIO ESX Device Connectivity Mode to &quot;apd&quot; will also avoid triggering them, as the array will no longer send a &quot;Logical Unit Not Supported&quot; response - the only SCSI response that triggers this issue.</p>
<h4 id="bestpractice">Best Practice</h4>
<p>In general, the best practice for this setting would be to configure it to &quot;apd&quot;.</p>
<p>If you&apos;re running one of the affected VMware ESXi versions and are planning an XtremIO upgrade, then there&apos;s basically no choice - without changing this setting the upgrade will result in ESXi incorrectly deciding that all paths to the storage are down, resulting in a host-side data unavailability state.</p>
<p>If you&apos;re not running one of those versions, or not planning an upgrade, it&apos;s still generally recommended to change to &quot;apd&quot;.</p>
<p>As this setting only affects what happens when a storage controller is being shutdown, there&apos;s no impact on the operation of the array when making that change.  If the &quot;Operating System&quot; setting for the initiators is not correctly set to &quot;ESX&quot; within XtremIO you&apos;ll also need to change that - which can also be done on the fly without any impact (similar to the APD/PDL setting, this setting only has impact when an SC is shutting down for some reason).</p>
<h4 id="howaboutx2">How about X2?</h4>
<p>XtremIO X2 acts differently when shutting down a storage controller.  For unplanned outages like power failure, the Storage Controller will immediately stop responding (rather than running on BBU&apos;s during the shutdown process like XtremIO X1 did).  As a result, the host will simply see the path drop, and will act accordingly.</p>
<p>For planned Storage Controller shutdowns (including upgrades), the behavior is similar - the array will drop the port, which the host will detect and multipathing will offline the path.</p>
<p>In both cases, this results in an &apos;APD-style&apos; behavior on ESXi, so whilst the mechanism is different, the end result is the same as the new behavior on XtremIO X1.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Hidden City Ticketing]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Recently there has been a lot of talk around a method of saving on airfares called &quot;Hidden City Ticketing&quot;.  Whilst Hidden City Ticketing can save money, there are a number of catches when it comes to using it - catches that can cause extreme difficulties and extra cost</p>]]></description><link>https://blog.docbert.org/hidden-city-ticketing/</link><guid isPermaLink="false">657fa1afcc19c60001cb05b3</guid><category><![CDATA[travel]]></category><category><![CDATA[hidden-city]]></category><category><![CDATA[airlines]]></category><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Fri, 20 Apr 2018 20:19:34 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Recently there has been a lot of talk around a method of saving on airfares called &quot;Hidden City Ticketing&quot;.  Whilst Hidden City Ticketing can save money, there are a number of catches when it comes to using it - catches that can cause extreme difficulties and extra cost if they are not fully understood.</p>
<p>If you are considering using Hidden City Ticketing then I would suggest reading and fully understanding the details below before you do to avoid getting badly burnt...</p>
<h3 id="sowhatisit">So What Is It?</h3>
<p>Let&apos;s say you wanted to fly from St Louis (STL) to Atlanta (ATL).</p>
<p>Only one airline flies that route directly - Delta, as Atlanta is one of their hubs.  Other airlines will of course allow you to fly from St Louis to Atlanta, but they will require you to have a stop somewhere. For example, with American Airlines you will need to fly via Charlotte, or with United Airlines via Chicago (ORD).</p>
<p>In a simplistic world, airlines would just charge for their flights based on the distance of the flight - but we all know that&apos;s not how airline prices work.</p>
<p>From a pricing perspective, Delta has the ability to charge a premium for this flight.  Passengers get the advantage of not needing a stopover anywhere, and thus arrive earlier - so Delta can likely charge more for that flight.  If Delta were to charge $200 for that flight, United and American might need to charge $150 in order to get passengers to fly their more roundabout, and longer, routing.</p>
<p>But what if you don&apos;t actually want to go from St Louis to Atlanta, but you actually want to go to Chicago?  This is a similar situation to the one above, only reversed - Delta don&apos;t fly that route directly (only via Atlanta), whilst United Airlines and American have direct flights.  This time UA and AA get to charge the premium price (let&apos;s say $200 again), whilst Delta have to discount (say $180) in order to compensate for the stopover.</p>
<p>So now you&apos;ve got 3 options for flying from St Louis to Chicago.</p>
<ol>
<li>
<p>You can fly on United, and pay the $200 fare</p>
</li>
<li>
<p>You can fly on Delta, via Atlanta, at the discounted $180 fare</p>
</li>
<li>
<p>Or, you could book a ticket with United to fly from St Louis to Atlanta via Chicago for $150, but simply get off the plane in Chicago and not board your connecting flight!</p>
</li>
</ol>
<p>Option 3 is what is known as &quot;Hidden City Ticketing&quot; - you&apos;ve purchased a ticket from St Louis to Atlanta, but your real destination is the &quot;hidden city&quot; along the route, Chicago!</p>
<p>You&apos;ve managed to get the best routing (direct, with no stops!) for a price that was even cheaper than the indirect route with a stop.  Not surprisingly the airline isn&apos;t going to be too happy with you as you&apos;ve managed to get a flight they wanted to charge you $200 for, but only paid $150 for it - but what can they do, right?</p>
<h3 id="thedownsideofhiddencityticketing">The Downside of Hidden City Ticketing</h3>
<p>Turns out, there are a number of things they can do - and some of them can end very badly for you.</p>
<h4 id="futureflights">Future Flights</h4>
<p>The first thing most airlines will do when you fail to board a flight is cancel the rest of the flights on your itinerary - including any return flights.  Say that in your efforts to get a cheap STL-ORD return flight you booked STL-ORD-ATL, then ATL-ORD-STL, on the same booking.  The moment you fail to board your ORD-ATL flight they will cancel not only that flight (which you never intended to take anyway!), but also the ATL-ORD-STL flights for the return - leaving you stranded in Chicago with no return flight (and no refund!)</p>
<p>The workaround for this is fairly simple - book everything as a one-way flight - with your outbound and return trips on separate bookings.  This can sometimes change the pricing dramatically (airlines often charge more for one-way flights than for returns), but it&apos;s really the only option.</p>
<p>This rule also means that you must be flying TO the hidden city, not FROM it.  If you&apos;re trying to fly from Chicago (ORD) to St Louis, then booking ATL-ORD-STL won&apos;t help you as once you fail to board the ATL-ORD flight the airline will cancel your ORD-STL booking as well!</p>
<h4 id="baggage">Baggage</h4>
<p>When you check bags for a flight, the airline will automatically send them to your final destination.  If you fail to board your second flight, one of two things will happen:</p>
<p>For domestic flights in the US, your bags will carry on to the final airport - even if you don&apos;t. As a result, if you checked bags for your STL-ORD-ATL flight but didn&apos;t go beyond Chicago, your bags will still fly to Atlanta - and then it&apos;ll be your responsibility (and cost!) to get them back.  You can&apos;t expect any sympathy from the airline here, as they will likely know exactly what has happened...</p>
<p>For most other flights, there are laws that if you fail to board your flight then your bags must be removed.  Whilst this does mean your bags will stop in the intermediate airport (Chicago, in our example - even though that&apos;s not technically relevant as this would be a US domestic flight), in doing so the airline will have had to delay the flight to remove the bags - at a real cost to them, and an annoyance to everyone on the flight.  As a result, they are not going to do you any favors in actually getting your bags back to you, and at a minimum you can expect to be in for a significant delay whilst they do so.</p>
<p>So to put it simply, <strong>you can&apos;t check bags when using hidden city ticketing</strong> - but can only travel with carry-on bags.</p>
<p>If you travel light this might not sound like a big disadvantage, but there are countless things that can go wrong.  If the airline decides your bags are too large/heavy, or if the plane runs out of space in the overhead bins, then you might be forced to check your bag even if you don&apos;t want to.  At best, you will need to be very careful to make sure that this doesn&apos;t happen.</p>
<p>There used to be a workaround for this issue, which was known as &quot;short checking&quot;, where the airline would allow you to check your bag only as far as an intermediate airport.  Thus even though you were (apparently) flying to Atlanta, you could ask them to only check your bag as far as Chicago.  In recent years most airlines have removed the ability to short check a bag - mainly because people were using it as a way to get around the baggage problem with hidden city ticketing!  Whilst you can still ask, it&apos;s an almost certainty that you will be refused.</p>
<h4 id="irrops">IRROPS</h4>
<p>IRROPS (short for IRregular OPerationS) is airline speak for something going wrong.  It could be a flight being canceled or delayed due to bad weather, a mechanical issue, or even an oversold flight that they need to move passengers off.</p>
<p>When you buy a ticket from St Louis to Atlanta via Chicago, the airline&apos;s obligation is to get you to Atlanta.  There is no requirement for them to actually fly you on the route you originally booked - although obviously they generally would if everything is going to plan.</p>
<p>If your St Louis to Chicago flight is cancelled, the airline might simply move you to a later flight on the same route - or they might move you to a completely different routing - such as St Louis to Newark to Atlanta, or even move you to another airline&apos;s flight, such as putting you on the direct Delta St Louis to Atlanta flight.</p>
<p>Of course, as you were actually only headed to Chicago this leaves you with a problem - but it&apos;s one that you really can&apos;t do much about!  Whilst you can certainly ask to be moved back to a St Louis-&gt;Chicago-&gt;Atlanta routing, there&apos;s absolutely no requirement for the airline to comply, and even if they do they will likely give you far lower priority than any passengers actually booked to Chicago so it could be hours or even days before you can actually get on a flight!</p>
<h4 id="legalities">Legalities</h4>
<p>For the most part, there is nothing illegal about using Hidden City Ticketing.  It is generally against the contract you have with the airline (the one that came along with your ticket but which of course nobody ever bothers to read!), most of which state that you can&apos;t buy a ticket you don&apos;t intend to fly.  Of course, it&apos;s near impossible for the airline to prove you never intended to take the second flight (and didn&apos;t simply change plans halfway through), so unless you become a serial offender you&apos;re likely not going to have problems there.</p>
<p>That said, there are situations where there could be legal implications when travelling internationally.  For example, say you had bought a ticket from the US to New Zealand via Australia, with an intent to stay in Australia. As you were booked as a transit passenger through Australia, the various Australian Arrival taxes would not have been paid with your ticket, as these are not charged for transit passengers - but they would be due as you are no longer transiting to New Zealand.  This could potentially cause difficulties for either you or the airline.</p>
<h4 id="visas">Visas</h4>
<p>Another issue that can occur with international itineraries is that of visa requirements.  For example, it&apos;s not uncommon to find cheap hidden city flights from the US to Europe, with Russia being the final destination (for example, San Francisco -&gt; Paris -&gt; Moscow).</p>
<p>However in order to be allowed to board even the first leg of this flight, you will need to have a visa for the final destination - Russia.  It doesn&apos;t matter that you are planning to fly only as far as Paris - you will not be allowed to board the plane in San Francisco without a Russian visa.</p>
<h3 id="internationaltodomestic">International-to-Domestic</h3>
<p>One of the few cases where you can generally avoid the issues above is <em>some</em> international flights, connecting to a domestic flight, where your real destination (the hidden city) is the first point of arrival from the international flight.</p>
<p>For example, say you want to fly from Singapore to San Francisco.  Adding an additional flight to somewhere in the US could cause the price to drop (eg, Singapore -&gt; San Francisco -&gt; Reno).</p>
<p>As is the case in many (but not all!) countries, the US requires you to collect your bags at the first point of entry into the country - in this case San Francisco. Despite the fact your baggage is tagged all the way to your final destination, you would normally need to collect it in San Francisco and then give it back to the airline for your connecting flight - but in general there is nothing stopping you simply taking your luggage and leaving the airport.</p>
<p>However even then you need to be careful as the rules can and do vary from country-to-country, and even airport-to-airport.  For example, whilst it&apos;s possible to do what is described above in San Francisco, it is NOT possible at Washington Dulles airport, as connecting passengers (and their bags) are sent to a separate immigration/customs area where there is no choice but to re-check your bag!</p>
<h3 id="soisitworthit">So Is It Worth It?</h3>
<p>Obviously different people will have different answers to this, but in my opinion, Hidden City Ticketing is NOT worth the effort or trouble it can cause.  Yes, there is the opportunity to save some money, but in most cases the risk of something going wrong outweighs the potential savings.</p>
<p>If you know you&apos;ll be travelling with only a small amount of carry-on luggage (ie, something that can fit under the seat in front of you), on a route that has a low chance of IRROPS/delays, and are willing to fight to get what you want if something does go wrong, then it might be worth your effort - but in most cases the potential for problems outweighs the advantages.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[REST API Scripting with curl]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I&apos;ve stated previously that using curl for accessing the XtremIO REST API (or any API!) isn&apos;t ideal as it&apos;s difficult to handle errors returned by the API.</p>
<p>Below is a BASH function that can help with this. It takes the options for a REST</p>]]></description><link>https://blog.docbert.org/rest-api-scripting-with-curl/</link><guid isPermaLink="false">657fa1afcc19c60001cb059f</guid><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Fri, 30 Mar 2018 11:08:38 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I&apos;ve stated previously that using curl for accessing the XtremIO REST API (or any API!) isn&apos;t ideal as it&apos;s difficult to handle errors returned by the API.</p>
<p>Below is a BASH function that can help with this. It takes the options for a REST API call, and returns an error if either the curl/network/etc side of things fails, OR if the REST API returns an error.</p>
<p>Using it is as simple as calling the function with the relevant parameters, and then checking the return value to see if the API call was successful or not.</p>
<p>eg :</p>
<pre><code>Response=$(API admin Xtrem10 GET https://xms.example.com/api/json/v2/types/volumes)
</code></pre>
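<p>Because the function&apos;s exit status carries the result, the call can sit directly in a shell <code>if</code>. A sketch - the stub API function here simply stands in for the real one (defined below) so the snippet runs on its own:</p>

```shell
#!/bin/bash
# Stand-in for the API function defined later in the post, so this snippet is
# self-contained; replace it with the real function in practice.
API() { echo '{"volumes":[]}'; return 0; }

if Response=$(API admin Xtrem10 GET https://xms.example.com/api/json/v2/types/volumes); then
        echo "Success: $Response"
else
        echo "API call failed" >&2
fi
```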
<p>The full API function, along with an example, is included below.  You can also get it <a href="https://www.docbert.org/blog/API?ref=blog.docbert.org">from here</a></p>
<pre><code>#!/bin/bash

#
# The API function will initiate a REST API call to the XtremIO XMS with
# error checking.
#
# Usage :
#    API username password req_type URL req_body
#
#      username - The username to use to authenticate to the XMS (eg, admin)
#      password - The password to use to authenticate to the XMS
#      req_type - One of GET, POST, PUT or DELETE
#      URL      - The full URL to the resources
#      req_body - For POST/PUT, the JSON-formatted body of the request
#
# The function returns 0 if the request is successful, and the JSON response
# from the array (if any) is sent to stdout, otherwise it returns non-zero.
#
API() {

        local TMPFILE=`mktemp`
        local BODY=&quot;&quot;

        if [ $# -eq 5 ]; then
                BODY=&quot;-d ${5}&quot;
        elif [ $# -ne 4 ]; then
                echo Invalid number of arguments passed to API
                return 99
        fi

        local RESP RC

        # Capture curl&apos;s exit status immediately - combining the assignment
        # with &quot;local&quot; would overwrite $? before we could check it
        RESP=$(curl -s --output $TMPFILE --write-out &quot;%{http_code}&quot; -X ${3} ${BODY} -u ${1}:${2} -k ${4})
        RC=$?

        if [ $RC -ne 0 ]; then
                RESP=$RC
        elif [ $RESP -ge 200 ] &amp;&amp; [ $RESP -le 299 ]; then
                RESP=0
        else
                RESP=1
        fi

        cat $TMPFILE
        rm $TMPFILE

        return $RESP
}

Response=$(API admin Xtrem10 POST https://xms.example.com/api/json/v2/types/snapshots &apos;{&quot;from-consistency-group-id&quot;:&quot;CG1&quot;,&quot;to-snapshot-set-id&quot;:&quot;SS1&quot;,&quot;no-backup&quot;:&quot;true&quot;}&apos;)

if [ $? -eq 0 ]; then
        echo SUCCESS.  Response was :
        echo $Response
else
        echo ERROR.  Error was :
        echo $Response
fi

</code></pre>
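<p>One pattern that builds on this: transient network errors are common when scripting against a remote XMS, so the function can be wrapped in a simple retry loop. A sketch - the wrapper name, retry count, and delay are all arbitrary choices, and the stub API below only simulates a call that fails twice before succeeding so the example is runnable on its own:</p>

```shell
#!/bin/bash
# Stub standing in for the real API function above: fails twice, then succeeds.
calls=0
API() { calls=$((calls + 1)); [ $calls -ge 3 ]; }

# Retry wrapper: call API up to 3 times, pausing between attempts, and return
# the last failing status if all attempts fail.
API_RETRY() {
        local tries=3 rc=1
        while [ $tries -gt 0 ]; do
                API "$@" && return 0
                rc=$?
                tries=$((tries - 1))
                sleep 1
        done
        return $rc
}

API_RETRY admin Xtrem10 GET https://xms.example.com/api/json/v2/types/volumes \
        && echo "succeeded after $calls attempts"
```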
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[XtremIO Snapshots and Veritas Volume Manager]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>It&apos;s been 20 years almost to the day since I started using Veritas Volume Manager, or Sun Enterprise Volume Manager (SEVM) as the re-branded version from Sun was called at the time.  SEVM used to be included free with all Sun SPARCstorage Array 110 model arrays (an external</p>]]></description><link>https://blog.docbert.org/xtremio-snapshots-and-vxvm/</link><guid isPermaLink="false">657fa1afcc19c60001cb05b2</guid><category><![CDATA[xtremio]]></category><category><![CDATA[vxvm]]></category><category><![CDATA[snapshot]]></category><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Sun, 24 Dec 2017 00:47:26 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>It&apos;s been 20 years almost to the day since I started using Veritas Volume Manager, or Sun Enterprise Volume Manager (SEVM) as the re-branded version from Sun was called at the time.  SEVM used to be included free with all Sun SPARCstorage Array 110 model arrays (an external proprietary fiber-connected JBOD array capable of supporting up to 30x 2Gbyte or 4Gbyte disks!)</p>
<p>The good news is that in almost 20 years, very little has changed, so when a customer asked me to do some testing with Veritas InfoScale (as it&apos;s now called) on XtremIO X2 it was amazingly easy to get things up and running - despite it being almost 10 years since I&apos;ve used VXVM!</p>
<p>The specific purpose of the testing was to show refreshing of an XtremIO snapshot on a Linux (CentOS 7) host running VXVM.  Before we do a refresh, we need to actually take the snapshot, and make it available on the host...</p>
<h3 id="thesetup">The Setup</h3>
<p>To do this I set up a clean CentOS 7.4 host, installed Veritas InfoScale Enterprise Storage 7.3.1, and then presented 2 LUNs from an XtremIO array to the host.  As I was using VMware, the LUNs were mapped to the host as RDMs.</p>
<div class="image-div" style="width:1140px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM1.png" alt="VXVM1" loading="lazy"></p></div><p></p>
<p>Using vxdiskadm I then added these disks to a new diskgroup (&quot;proddg&quot;), created a 200GB volume (&quot;prodvol1&quot;) in that diskgroup, and created a new VXFS filesystem on it, mounted under /prod1 :</p>
<div class="image-div" style="width:639px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM2.png" alt="VXVM2" loading="lazy"></p></div><p></p>
<p>In order to track different copies, I created a file with the current time/date in this filesystem :</p>
<div class="image-div" style="width:683px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM3.png" alt="VXVM3" loading="lazy"></p></div><p></p>
<h3 id="takingthesnapshot">Taking the Snapshot</h3>
<p>With all of that done, it&apos;s then back to the array to create the initial snapshot of these volumes. The initial 2 volumes were put in an XtremIO Consistency Group called &quot;Vol1cg&quot;, of which I then created a repurpose copy (ie, a writable snapshot) called &quot;Vol1cg-copy1&quot; :</p>
<div class="image-div" style="width:709px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM4.png" alt="VXVM4" loading="lazy"></p></div><p></p>
<p>After presenting the new snapshot LUNs to the VMware host, I then mapped them to the same Linux guest as the original LUNs.  This adds an extra complexity to the process, as the snapshot LUNs have the exact same VXVM signature on them as the primary LUN, so as far as VXVM is concerned they are the same disks.</p>
<p>Thankfully VXVM handles this cleanly, detecting that the serial number of the disk has changed, resulting in a &quot;<a href="https://www.veritas.com/support/en_US/article.100001562.html?ref=blog.docbert.org">udid_mismatch</a>&quot;.  This is a good thing, as it causes VXVM to ignore those disks and not try to treat them as part of the existing diskgroup.</p>
<div class="image-div" style="width:691px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM5.png" alt="VXVM5" loading="lazy"></p></div><p></p>
<p>(xtremio0_0 and _1 are the existing disks. xtremio0_2 and _3 are the snapshot LUNs)</p>
<p>If we had presented these snapshot LUNs to a different host, at this point we could have simply imported the diskgroup (optionally renaming it in the process) and started using it. However because we&apos;ve got both the original and copy LUNs on the same host, we need to do an extra step to tell Veritas exactly which LUNs we&apos;re wanting to work on.</p>
<p>To do this, we add a &quot;tag&quot; to the LUNs that we want to operate on.  The actual tag we use isn&apos;t important, as long as we&apos;re consistent - in this case I tagged the disks with &quot;devcopy&quot; :</p>
<div class="image-div" style="width:546px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM6.png" alt="VXVM6" loading="lazy"></p></div><p></p>
<p>Next we can import the copied diskgroup with a few important flags :</p>
<ul>
<li><strong>-n devdg</strong>    -  The new name for the diskgroup (so we don&apos;t have two diskgroups with the same name)</li>
<li><strong>-o useclonedev=on</strong>  -  This is the magic flag that tells VXVM that we&apos;re importing a hardware-cloned LUN, and allows it to ignore a number of things that would otherwise be an issue (eg, the diskgroup marked as already being in use)</li>
<li><strong>-o tag=devcopy</strong>  -  Defines which copy of the disks we want to use - being those that we tagged with the &quot;devcopy&quot; tag above</li>
<li><strong>-o updateid</strong> -  Tells VXVM to create new disk identifiers for the disks, so they no longer conflict with the originals.</li>
</ul>
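<p>Pulling those flags together, the tag-and-import sequence from the screenshots looks roughly like this. The disk, diskgroup, and tag names follow the example above, while the mount point for the copy (/devcopy1) is hypothetical - treat this as a sketch rather than an exact transcript:</p>

```shell
# Tag the snapshot LUNs so we can refer to just those disks
vxdisk settag xtremio0_2 devcopy
vxdisk settag xtremio0_3 devcopy

# Import the cloned copy of "proddg" under the new name "devdg"
vxdg -n devdg -o useclonedev=on -o tag=devcopy -o updateid import proddg

# Start the volumes and mount the copied filesystem (mount point hypothetical)
vxvol -g devdg startall
mount -t vxfs /dev/vx/dsk/devdg/prodvol1 /devcopy1
```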
<div class="image-div" style="width:776px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM7.png" alt="VXVM7" loading="lazy"></p></div><p></p>
<p>Note that when mounting the filesystem it did a journal replay - that&apos;s because the filesystem was still mounted when we took the snapshot.  For non-vxfs filesystems extra steps like an fsck might be required.</p>
<p>Finally we can check the file we created on the filesystem and confirm that it&apos;s an exact copy of the original :</p>
<div class="image-div" style="width:418px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM8.png" alt="VXVM8" loading="lazy"></p></div><p></p>
<p>By changing the file on the original volume we can confirm that it does NOT change on the copy - which is what we would expect given that it&apos;s a completely independent copy of the volume/diskgroup :</p>
<div class="image-div" style="width:455px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM9.png" alt="VXVM9" loading="lazy"></p></div><p></p>
<h3 id="refreshingthesnapshot">Refreshing the Snapshot</h3>
<p>Refreshing the snapshot on the array is a simple, single-step process - but before we can do that we need to make sure the host is no longer using the volumes we&apos;re going to refresh.  The refresh will change the data on these LUNs without the filesystem/volume managers knowledge, which will result in perceived data corruption of the copy if the host is still accessing it.</p>
<p>For VXVM/VXFS, that&apos;s just a matter of unmounting the filesystem and deporting the disk group.  At that point &quot;vxdisk list&quot; will show the disks as being a part of the diskgroup, but with parentheses around the diskgroup name to show that it is not imported :</p>
<div class="image-div" style="width:609px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM10.png" alt="VXVM10" loading="lazy"></p></div><p></p>
<p>Refreshing the snapshot could be done via the XtremIO GUI, but it can also be easily done via the REST API - here we are refreshing the &quot;Vol1cg-copy1&quot; consistency group from the production &quot;Vol1cg&quot; CG :</p>
<div class="image-div" style="width:835px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM11.png" alt="VXVM11" loading="lazy"></p></div><p></p>
<p>Veritas does not continually scan LUNs for changes, which means that even after the refresh, vxdisk list shows the old data for the disk names - the same as we saw before the refresh occurred.  In order to fix this, we need to tell VXVM to rescan the disks and re-read their private regions by running &quot;vxdctl -f enable&quot; :</p>
<div class="image-div" style="width:773px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM12.png" alt="VXVM12" loading="lazy"></p></div><p></p>
<p>At this stage we&apos;re at the same point we were above with the snapshot LUNs being an exact copy of the production LUNs, and thus having the same VXVM identifiers, so we can run through the exact same steps (tag, vxdg import, mount) as before to import them as a new diskgroup :</p>
<div class="image-div" style="width:765px;">
<p><img src="https://blog.docbert.org/content/images/2017/12/VXVM13.png" alt="VXVM13" loading="lazy"></p></div><p></p>
<p>As you can see, the time in the file has changed to the new one - showing that the contents of the LUNs were refreshed!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Using the XtremIO REST API - Part 5]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>In <a href="http://blog.docbert.org/using-the-xtremio-rest-api-part-1/">Part 1</a> of <em>Using the XtremIO REST API</em> I covered how to access the API, and how to read data from it. <a href="http://blog.docbert.org/using-the-xtremio-rest-api-part-2/">Part 2</a> covered how to create and modify objects, <a href="http://blog.docbert.org/using-the-xtremio-rest-api-part-3/">Part 3</a> covered Snapshots, <a href="http://blog.docbert.org/using-the-xtremio-rest-api-part-4/">Part 4</a> was some Best Practices. Next up is an example using the API</p>]]></description><link>https://blog.docbert.org/using-the-xtremio-rest-api-part-5/</link><guid isPermaLink="false">657fa1afcc19c60001cb059a</guid><category><![CDATA[xtremio]]></category><category><![CDATA[restapi]]></category><category><![CDATA[perl]]></category><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Fri, 10 Nov 2017 22:43:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>In <a href="http://blog.docbert.org/using-the-xtremio-rest-api-part-1/">Part 1</a> of <em>Using the XtremIO REST API</em> I covered how to access the API, and how to read data from it. <a href="http://blog.docbert.org/using-the-xtremio-rest-api-part-2/">Part 2</a> covered how to create and modify objects, <a href="http://blog.docbert.org/using-the-xtremio-rest-api-part-3/">Part 3</a> covered Snapshots, <a href="http://blog.docbert.org/using-the-xtremio-rest-api-part-4/">Part 4</a> was some Best Practices. Next up is an example using the API from Perl.</p>
<h3 id="usingtherestapifromperl">Using the REST API from Perl</h3>
<p>My language-of-choice for scripting is Perl, and whilst it&apos;s not necessarily the best language for REST APIs, there are a few modules that can make it relatively easy, including REST::Client and JSON.</p>
<p>If you&apos;re using Red Hat/CentOS/etc then the majority of the modules you need can be installed using the following :</p>
<p><code>yum install perl-libwww-perl perl-JSON perl-Crypt-SSLeay</code></p>
<p>However, at least for version 6, the REST::Client module itself isn&apos;t available in the default RHEL/CentOS repositories, but it is available in EPEL.  After configuring access to EPEL (&quot;yum install epel-release&quot; on CentOS, a little more difficult on RHEL) you can install it using :</p>
<p><code>yum install perl-REST-Client</code></p>
<p>Alternatively you can download the REST::Client module from <a href="http://search.cpan.org/~kkane/REST-Client/lib/REST/Client.pm?ref=blog.docbert.org">CPAN</a> and either compile/install it, or simply grab the lib/REST directory and put it somewhere in your Perl library path (eg, the current directory!)</p>
<p>Next grab this relatively simple example script - <a href="https://www.docbert.org/blog/show-vols?ref=blog.docbert.org">show-vols</a> - which will query the API and show basic details of all volumes configured on an array.  You&apos;ll need to configure the details near the top of the script (username/password/XMS hostname or IP), but otherwise this script shouldn&apos;t need any changes.</p>
<p>Firstly we load in the Perl modules we&apos;re going to use.  REST::Client is obviously used to make the REST requests, whilst JSON is needed to help us parse the results returned from the API, and MIME::Base64 is used to encode the username/password into the request. Data::Dumper isn&apos;t technically required, but it can be useful when trying to see the details of what is being returned by the API.</p>
<pre><code>use REST::Client;
use MIME::Base64;
use JSON;
use Data::Dumper;
</code></pre>
<p>Next is a fairly simple function called checkerr() that can be used after each REST call to make sure the call succeeded.  In this case all errors will be fatal and the script will immediately print an error and exit, but of course that could be modified to act differently if required.</p>
<p>The XtremIO REST API requires very minimal configuration within the REST::Client module - we simply need to set the Content-Type to be application/json, and configure the standard HTTP Authentication header (note that whilst the password appears to be sent in clear-text, all communication is over HTTPS, so it&apos;s actually fully encrypted) :</p>
<pre><code>my $client = REST::Client-&gt;new();
my $headers = {Authorization =&gt; &quot;Basic &quot;.encode_base64($username.&quot;:&quot;.$password), &quot;Content-Type&quot; =&gt; &apos;application/json&apos;};
</code></pre>
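<p>As a quick sanity check, the Base64 portion of that header can be reproduced from the shell (the credentials below are placeholders for illustration, not values from this post) :</p>

```shell
# Build the same HTTP Basic Authorization value the Perl code above produces.
# "admin"/"password" are hypothetical credentials for illustration only.
username="admin"; password="password"
auth="Basic $(printf '%s' "$username:$password" | base64)"
echo "$auth"   # Basic YWRtaW46cGFzc3dvcmQ=
```

<p>The resulting value can be passed to any HTTP client - for example via curl&apos;s <code>-H</code> option - if you want to test a request outside of Perl.</p>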
<p>Now we can make our API call.  REST::Client has a few ways of doing this, but the easiest is to just use the GET, POST, PUT and DELETE methods. So to issue a GET request to get the list of volumes we simply have :</p>
<pre><code>$client-&gt;GET(&quot;https://$xms/api/json/v2/types/volumes&quot;, $headers);
</code></pre>
<p>The output from the above command will be a block of JSON-formatted data, like :</p>
<pre><code>{
    &quot;volumes&quot;: [
        {
            &quot;href&quot;: &quot;https://xtremio4.scxtremiolab.com/api/json/v2/types/volumes/344&quot;,
            &quot;name&quot;: &quot;MyVol1&quot;,
            &quot;sys-name&quot;: &quot;mycluster&quot;
        },
        {
            &quot;href&quot;: &quot;https://xtremio4.scxtremiolab.com/api/json/v2/types/volumes/345&quot;,
            &quot;name&quot;: &quot;MyVol2&quot;,
            &quot;sys-name&quot;: &quot;mycluster&quot;
        },
</code></pre>
<p>The Perl JSON module (specifically the from_json() function) can trivially parse this, returning a tree of hashes and arrays depending on the actual data.  The easiest way to see how it has actually formatted the resulting data is using the Data::Dumper module.  Using <code>print Dumper($resp)</code> gives us :</p>
<pre><code>$VAR1 = {
          &apos;volumes&apos; =&gt; [
                         {
                           &apos;href&apos; =&gt; &apos;https://xtremio4.scxtremiolab.com/api/json/v2/types/volumes/344&apos;,
                           &apos;name&apos; =&gt; &apos;MyVol1&apos;,
                           &apos;sys-name&apos; =&gt; &apos;mycluster&apos;
                         },
                         {
                           &apos;sys-name&apos; =&gt; &apos;mycluster&apos;,
                           &apos;name&apos; =&gt; &apos;MyVol2&apos;,
                           &apos;href&apos; =&gt; &apos;https://xtremio4.scxtremiolab.com/api/json/v2/types/volumes/345&apos;
                         },
[...]
</code></pre>
<p>So we have a hash (with at least one key called &quot;volumes&quot;), containing values that are an array (due to the []&apos;s displayed), with each entry in the array being a hash with 3 entries - name (the name of the volume), sys-name (the cluster name) and the unique URI for that volume.  The URI is what will allow us to get more information for each of those volumes, so we can then walk through the array and grab just the href values for each of the volumes :</p>
<pre><code>foreach my $v (@{$resp-&gt;{volumes}}) {
	push @Volumes, $v-&gt;{href};
}
</code></pre>
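<p>If you have <code>jq</code> available, this extraction step is easy to sanity-check from the shell. A small sketch, using a hand-built response shaped like the sample above :</p>

```shell
# Hypothetical response body, matching the structure of the sample above.
json='{"volumes":[
  {"href":"https://xtremio4.scxtremiolab.com/api/json/v2/types/volumes/344","name":"MyVol1","sys-name":"mycluster"},
  {"href":"https://xtremio4.scxtremiolab.com/api/json/v2/types/volumes/345","name":"MyVol2","sys-name":"mycluster"}]}'

# The equivalent of the Perl foreach loop: collect the href of every volume.
hrefs=$(printf '%s' "$json" | jq -r '.volumes[].href')
echo "$hrefs"
```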
<p>Next it&apos;s simply a matter of looping through each of the href values, doing a GET for the URI to get the full details for that volume, and then printing the results.</p>
<p>Of course once again we&apos;ll get back a JSON string for the results, so again it&apos;s best to use Data::Dumper to start with to learn the format of the response :</p>
<pre><code>$VAR1 = {
          &apos;content&apos; =&gt; {
                         &apos;creation-time&apos; =&gt; &apos;2017-11-06 15:30:23&apos;,
                         &apos;acc-size-of-rd&apos; =&gt; &apos;0&apos;,
                         &apos;small-rd-bw&apos; =&gt; &apos;0&apos;,
                         &apos;wr-latency&apos; =&gt; &apos;0&apos;,
[...etc...]
                         &apos;name&apos; =&gt; &apos;MyVol1&apos;,
                         &apos;acc-num-of-small-rd&apos; =&gt; &apos;0&apos;
                       }
        };
</code></pre>
<p>And from there we can find the fields we want and display them :</p>
<pre><code>print &lt;&lt; &quot;EOT&quot;;
Volume name : $resp-&gt;{content}-&gt;{name}
Volume size : $resp-&gt;{content}-&gt;{&quot;vol-size&quot;}
EOT
</code></pre>
<h3 id="theeasierway">The easier way</h3>
<p>The example above gets the list of volumes and then loops through each one to get the full details.  There&apos;s actually an easier way to do this, which is to append ?full=1 to the URL.  eg, instead of requesting</p>
<pre><code>https://xms.example.com/api/json/v2/types/volumes
</code></pre>
<p>which returns the name/cluster/href for each volume and requires a subsequent call for each volume to get the actual details, we can request</p>
<pre><code>https://xms.example.com/api/json/v2/types/volumes?full=1
</code></pre>
<p>which will return all of the available data for all volumes.</p>
<p>You can even take this a step further and specify exactly which properties you want returned, rather than the full list.  In the script linked above we are only interested in the vol-size, logical-space-in-use and creation-time properties (plus name, which is always included) so we could use the following request :</p>
<pre><code>https://xms.example.com/api/json/v2/types/volumes?full=1&amp;prop=vol-size&amp;prop=logical-space-in-use&amp;prop=creation-time
</code></pre>
<p>which will return only those fields (plus a few default ones) :</p>
<pre><code>{
    &quot;volumes&quot;: [
        {
            &quot;index&quot;: 344,
            &quot;creation-time&quot;: &quot;2017-11-06 15:30:23&quot;,
            &quot;name&quot;: &quot;MyVol1&quot;,
            &quot;vol-size&quot;: &quot;1048576&quot;,
            &quot;guid&quot;: &quot;e44cdfc10d914d7cba85de1a90af6d95&quot;,
            &quot;logical-space-in-use&quot;: &quot;0&quot;
        },
</code></pre>
<h3 id="makingchanges">Making changes</h3>
<p>You can just as readily use REST::Client to make changes to the array as well, using POST, PUT and DELETE functions.  Take a look at the <a href="https://www.docbert.org/blog/create-vol?ref=blog.docbert.org">create-vol</a> script for a simple example of using a POST call to create a new volume.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Parsing XtremIO CLI Output]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I generally don&apos;t recommend people use the XtremIO CLI for programmatic access - it&apos;s just so much easier and better to use the <a href="https://blog.docbert.org/using-the-xtremio-rest-api-part-1/">REST API</a>.</p>
<p>However there are occasions where using the CLI may make sense. Access to the CLI can be automated via SSH, and</p>]]></description><link>https://blog.docbert.org/parsing-xtremio-cli-output/</link><guid isPermaLink="false">657fa1afcc19c60001cb05a1</guid><category><![CDATA[xtremio]]></category><category><![CDATA[cli]]></category><dc:creator><![CDATA[Scott Howard]]></dc:creator><pubDate>Mon, 24 Apr 2017 00:39:22 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I generally don&apos;t recommend people use the XtremIO CLI for programmatic access - it&apos;s just so much easier and better to use the <a href="https://blog.docbert.org/using-the-xtremio-rest-api-part-1/">REST API</a>.</p>
<p>However there are occasions where using the CLI may make sense. Access to the CLI can be automated via SSH, and sending commands is relatively trivial - right up until it becomes time to parse the output from those commands.</p>
<pre><code>$ ssh scott@xms &apos;add-volume vol-name=&quot;MyVol1&quot; vol-size=&quot;1t&quot;&apos;
Added Volume MyVol1 [29]
</code></pre>
<p>The normal output from CLI commands that return data is a series of columns, which at first might appear easy to parse.  For example, if I pick one of the more simple commands as an example we see something like this :</p>
<pre><code>xmcli (admin)&gt; show-bricks
Brick-Name Index Cluster-Name Index State
X1         1     xtremio4     1     in_sys
X2         2     xtremio4     1     in_sys

</code></pre>
<p>5 columns, each with a single entry, and each at what appears to be a fixed width.  The problem is that none of those assumptions are necessarily true! Look at what happens if I rename my cluster to have a longer name containing a space :</p>
<pre><code>xmcli (admin)&gt; show-bricks
Brick-Name Index Cluster-Name      Index State
X1         1     XtremIO Cluster 4 1     in_sys
X2         2     XtremIO Cluster 4 1     in_sys
</code></pre>
<p>As the entry in the field increases, the size of the field grows to accommodate it. Although not shown here, there&apos;s an additional issue: fields with no value simply show as blank, so between that and the spaces within the values we can&apos;t use field location to parse the entries.  This rules out basically every traditional way of parsing such output!</p>
<p>Thankfully there is a pattern to the output, and thus a way to parse it.  Although there may be spaces in the fields themselves, the field headings do NOT have spaces within them - and there are guaranteed to be at least one (and frequently more) spaces between the headings.  That number of spaces can change over time as the values change - there will always be at least one, and the columns below them will always align with the first character in the heading.</p>
<p>Taking the second show-bricks output above and looking at its header line we have :</p>
<pre><code>Brick-Name Index Cluster-Name      Index State
^          ^     ^                 ^     ^
1          12    18                36    42
</code></pre>
<p>We can then use these values to parse the data below into the relevant fields.</p>
<p>A relatively easy way to do this is to use the block of &quot;gawk&quot; code below.  Easy not because it&apos;s easy to understand - just easy because I&apos;ve already written it for you!</p>
<p><a href="https://blog.docbert.org/content/images/2016/06/parse-cli.txt"><div class="image-div" style="width: 681px;"><img src="https://blog.docbert.org/content/images/2016/06/gawk-1.png" alt loading="lazy"></div></a></p>
<p>The part of the command in red (including the &quot;NR&gt;1&quot;) should be left unchanged - this is the part of the code that splits the fields based on the header field lengths.</p>
<p>The part in blue is the actual action - in this case it&apos;s matching fields from the output - confirming that the field &quot;Created-From-Volume&quot; is non-zero, and that the field &quot;Volume-Name&quot; contains the substring from the variable &quot;contains&quot;. It then prints the value of the &quot;Volume-Name&quot; field if it matches.</p>
<p>The green section at the start sets the variable &quot;contains&quot; to &quot;Sol&quot;.  In this case I could have just as easily hard-coded that value, but if you&apos;re passing in shell variables it&apos;s much easier doing it as it&apos;s shown here - set a gawk variable to the value you want, and then use that within the gawk script.</p>
<p>The above obviously isn&apos;t very easy to read, so here&apos;s an easier version to read :</p>
<pre><code>gawk -v contains=&quot;$backup&quot; &apos;
  NR==1 {
    a=$0;
    while (a) {
      l=match(a, &quot;( [A-Z]|$)&quot;);
      flen=(flen l &quot; &quot;);
      n=substr(a, 1, l);
      sub(&quot; +$&quot;, &quot;&quot;, n);
      field[n]=++fn;
      a=substr(a, l+1);
    }
    FIELDWIDTHS=flen;
  }

  NR&gt;1 {
    for (i=1; i&lt;=NF; i++) {
      sub(&quot; *$&quot;, &quot;&quot;, $i);
    }
  }

  NR&gt;1 &amp;&amp; $field[&quot;Created-From-Volume&quot;] &amp;&amp; $field[&quot;Volume-Name&quot;] ~ contains {
    print $field[&quot;Volume-Name&quot;]
  } &apos;
</code></pre>
<p>Note that the above REQUIRES GNU Awk (gawk).  It uses features that are not available in standard &quot;awk&quot; (specifically the FIELDWIDTHS variable, if I recall correctly).</p>
<p>Obviously this is still fairly complex, and that&apos;s even without adding in any error checking - but if you must automate things via the CLI it at least makes it possible...<br>
&#xA0;</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>