It’s been two weeks since the annual food allotments were distributed to the Skeralian Operators. Having been promised an increased allotment after the great 2017 Q4 load crisis and the heroic efforts that made it the most productive winter ever for the International Oneness Nubian Evocation (NubE for short), it was quite a surprise that the allotments were paltry. The reception was compared to singing “Let it Go” from a famous animated movie of 21st-century Earth.
Discussions around the allotments and how underfed the population was centered on phrases like “fuck’em. Let’em die” and “Don’t they know the entire planet is struggling?” This has led to an exodus of key clans from the planet of Sker, such as the Crimson Killers, Ecstatic Sunders, Jam Butters, and Super Punks.
All of these clans are highly skilled and their loss will be felt keenly by the economy of Sker but particularly the Terra Optimus continent where most of the actual spaceships for the planet are built. Without them I fear that spaceship production will drop precipitously.
The current Corporate Terra Overlord has made several promises to provide emergency food measures, but it comes too late; trust has been broken with the Operators and the various clans.
To that end, I will continue to drive solutions forward, but Terra Optimus will look vastly different in the coming months.
Things I am interested in today….
- More data added to Have I Been Pwned – https://www.troyhunt.com/ive-just-added-2844-new-data-breaches-with-80m-records-to-have-i-been-pwned/
- Google AI turned Coders into ML gods? – https://www.technologyreview.com/s/609996/googles-self-training-ai-turns-coders-into-machine-learning-masters/?utm_source=twitter.com&utm_medium=social&utm_content=2018-02-26&utm_campaign=Technology+Review
- The forgotten theory of dreams that inspired Vladimir Nabokov – https://newrepublic.com/article/146906/night-vision
- I knew it… hardware degrades over time – https://blog.acolyer.org/2018/02/26/fail-slow-at-scale-evidence-of-hardware-performance-faults-in-large-production-systems/
- oh no.. machine learning for engineers – https://arxiv.org/abs/1709.02840
- security for serverless…. well we don’t need it for linux do we? – https://www.infoq.com/articles/serverless-security
- This guy was an SRE for a site that supported addicts… he might know a thing or two – https://zwischenzugs.com/2017/04/04/things-i-learned-managing-site-reliability-for-some-of-the-worlds-busiest-gambling-sites/
- What I Learned Burning $13,867 on YouTube Ads for Candy Japan – https://www.candyjapan.com/behind-the-scenes/what-i-learned-advertising-on-youtube
- Ghost Dancers because I love Native American History though America should be fucking ashamed of how we subjugated well just about fucking everyone – https://s-usih.org/2018/02/ghost-dancers-past-and-present/
- Operations with AI – https://itsvit.com/blog/ai-way-business-now/
It’s about Trust
So… I have heard Simon Sinek talk about this. I agreed with him. I knew it was true from past experiences, but it had been a minute. Just to confirm in case you were wondering… he was absolutely on the mark. You want to wreck trust in your company? Lay some people off. And do it with no warning, no notice, and please talk about it in the quarterly meeting with the entire company as if it is a boon for your financial profile.
I know. I get it. You are running a business. You have bottom lines, budgets, projections, promises, and agreements that must be honored. I know because I am in that room with you asking about the financials, helping with the c-level presentations, and doing the justification. Running a business is hard. I have failed at three personally and watched at least five others fail that were not mine. The statistics on who makes it and who doesn’t are astounding.
But just because you can doesn’t mean you should.
Maybe you should care about trust
Is there another way?
What if you just let people attrition out? What if you started firing those who weren’t carrying their weight instead of requiring groups with no dead weight to give up some percentage of their people? In the end, not only will you win goodwill by not wrecking the company or creating a culture of fear, but you will execute a transformation in an incremental, supportable way instead of a destructive one.
Sometimes you are in a corner and you have to do what you have to do, but if that is not the case…
Can you do it just a little differently and get massively different results? Because once that trust is gone, it is so hard to get back.
So there will come a day when you will need to calculate RCUs/WCUs for DynamoDB tables on AWS. This will probably come in the form of a word problem like this one…
From A Cloud Guru’s discussion group on a quiz on DynamoDB…
“You have a motion sensor which writes 600 items of data every minute. Each item consists of 5kb.
Your application uses eventually consistent reads. What should you set the read throughput to?”
Ignore the inconsistency of the quiz’s use of “writes” and assume instead that we want reads: the motion sensor reads 600 items of data every minute.
Here are a few simple steps:
- Is this a Read (RCU) or Write (WCU) calculation? – This matters because:
- RCUs have a 4kb chunk size per operation
- WCUs have a 1kb chunk size per operation.
- Calculate the number of items per second – Why? Because RCUs and WCUs (reads and writes) are provisioned per second. In our example above, we take 600 items and divide by 60 seconds (every minute). This comes out to 10 items per second being read in our example. $IpS
- Calculate the number of actions needed per item – Each item is 5kb and our chunk (or grab) size is 4kb per read unit. If this were a write, our chunk (or grab) size would be 1kb. That means for our 5kb item we need two 4kb operations to read it: one operation to grab the first 4kb and one more to grab the remaining 1kb. So we need 2 actions per item. $ApI
- Multiply Items per Second by Actions per Item ($IpS * $ApI) – In our example above, we take 10 items per second and multiply by 2 actions per item. This tells you that for our read example you need 20 RCUs.
- If you are doing reads then you need to know if it is eventually consistent (2 reads per RCU) or strongly consistent (1 read per RCU).
- If you are using strongly consistent reads then you are done. Your answer is 20 RCUs.
- If you are doing eventually consistent reads then take the result of Step 4 and divide it by two. So in our example, 20 RCUs / 2 = 10 RCUs needed for our word problem.
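The steps above can be sketched as a small calculation. Here is a minimal sketch in Python; the function name and structure are my own for illustration, not part of any AWS SDK:

```python
import math

def estimate_rcus(items_per_minute, item_size_kb, eventually_consistent=True):
    """Rough RCU estimate following the steps above.

    RCUs use a 4kb chunk size per operation; eventually consistent
    reads get two reads per RCU, strongly consistent reads get one.
    """
    items_per_second = items_per_minute / 60           # Step 2: $IpS
    actions_per_item = math.ceil(item_size_kb / 4)     # Step 3: $ApI (4kb chunks)
    rcus = items_per_second * actions_per_item         # Step 4: $IpS * $ApI
    if eventually_consistent:
        rcus /= 2                                      # Step 5: two reads per RCU
    return math.ceil(rcus)

# The word problem: 600 items/minute, 5kb each, eventually consistent reads
print(estimate_rcus(600, 5))                           # -> 10
print(estimate_rcus(600, 5, eventually_consistent=False))  # -> 20
```

A WCU estimate works the same way with a 1kb chunk size and no consistency division.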
This is mainly exam focused, so if this is for certification then at present (09/2016) that is what you need to know. If this is for real throughput calculations, make sure you account for Local Secondary Indexes, which “share” RCUs/WCUs with the table they are indexed against. You will also need to account for Global Secondary Indexes, which are basically a read-only copy of a table with its own RCUs/WCUs.
Let me know.
Many times I have been asked how I studied for the various Amazon Web Services exams and I thought I would write a specific series of steps of how I have approached this topic.
The first thing to note is that my background is that of a systems engineer so I come from the old school UNIX and Linux worlds of HP-UX, AIX, and Solaris. I evolved into architecture, networking, and virtualization (and of course people), but that isn’t where I stayed. So Citrix/VMware-style virtualization is normal for me as well as Windows in various flavors. I have also been doing AWS either directly or indirectly (as a manager or architect) for over 5 years. I know the lingo, read the blogs, and of course, now I have taught AWS for most of 2016.
But to the chase….
Nay, if our wits run the wild-goose chase, I am done; for
thou hast more of the wild-goose in one of thy wits than, I am
sure, I have in my whole five.
Romeo And Juliet Act 2, scene 4, 67–73
- Read the Exam Guide for the Developing on AWS Certification. In particular, read the sections on Domain knowledge. Note below that section the specific areas that you must study. Focus on those. Look up any terms you don’t understand like AMI, AAA, or CIDR. This will tell you where to spend most of your time.
- Take one of the following in preferred order:
- A Cloud Guru’s Developing on AWS course – This course normally runs for $30 and involves 8 hours of videos. A great starter course. The best part of it… the quizzes. Not only the quizzes in this course but the ones you can get below on Android (or iPhone?). Best recommendation and the one most people love. They also have ALL of the certifications, which I recommend. Do not move on until you can pass the final simulated quiz with a 95% or better.
- Linux Academy’s Developing on AWS course – This course is also good, with quizzes and other community tidbits. I actually like Linux Academy for the breadth of courses it offers and not just their AWS section. As of this writing (9/15), they do not have the DevOps Engineering certification, hence A Cloud Guru.
- Cloud Academy’s Developing on AWS course – This is the one course that I hear complaints about. I have never heard any complaints about A Cloud Guru or Linux Academy but hear many about Cloud Academy. Honestly, I don’t know why because they seem pretty solid, but I have to share my experience.
- Obtain the mobile app for A Cloud Guru (a.k.a Exam Certified) in the Google Play or iTunes store for the Developing on AWS exam. – There is an app for Solutions Architect and one for Developer, and I recommend the Developer mobile app to make sure you can test, test, test. Aim for 95% or higher and make sure you understand why your answer is the correct one.
- Do all of the free Intro labs for AWS at Qwiklabs and maybe even kinda sorta buy an account with them and go past the introductory courses.