# I am the Watcher. I am your guide through this vast new twtiverse.
# 
# Usage:
#     https://watcher.sour.is/api/plain/users              View list of users and latest twt date.
#     https://watcher.sour.is/api/plain/twt                View all twts.
#     https://watcher.sour.is/api/plain/mentions?uri=:uri  View all mentions for uri.
#     https://watcher.sour.is/api/plain/conv/:hash         View all twts for a conversation subject.
# 
# Options:
#     uri     Filter to show a specific user's twts.
#     offset  Start index for query.
#     limit   Count of items to return (going back in time).
# 
# twt range = 1 47
# self = https://watcher.sour.is/conv/bqxlviq
I did a take-home software engineering test for a company recently; unfortunately I was really sick at the time (have finally recovered) 😢 I was also interviewing for an SRE position at the same time (as well as Software Engineering).

Got the results of my take-home today and whilst there was some good feedback, man the criticisms of my work were harsh. I'm strictly not allowed to share the work I did for this take-home test, and I can really only agree with the "no unit tests" piece of the feedback. I _could_ have done better there, but I was time pressured, sick and ran out of steam. I was using a lot of libraries to do the work, so in the end I found it difficult to actually think about a proper set of "Unit Tests". I did write one (in shell) but I guess it wasn't seen?

The other points were on my report and future work. Not detailed enough I guess? Hmmm 🤔

Am I really this bad? Does my code suck? 🤔 Have I completely lost touch with software engineering? 🤦‍♂️
I'm also worried about the "lack of unit tests" feedback, because I am reminded of "TDD: When did it all go wrong" -- where you see so many engineers do "unit tests" wrong 🤦‍♂️ -- If you don't have time to watch this (rather long) video, TL;DR:

> Unit tests test the behaviour of the system, or the API boundaries.
> Unit tests should not test any implementation details.
> They should let you refactor the implementation.

This was the original intent behind Unit Tests and AFAIK still is.
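
A tiny, made-up Go illustration of that idea (nothing to do with the actual take-home): the test only touches the exported API, so the internals are free to be refactored without breaking it.

```go
package wordcount

import (
	"strings"
	"testing"
)

// CountWords is the public API under test; how it counts is an implementation detail.
func CountWords(s string) int { return len(strings.Fields(s)) }

// The test pins down observable behaviour at the API boundary only.
func TestCountWords(t *testing.T) {
	if got := CountWords("hello   twtxt world"); got != 3 {
		t.Errorf("CountWords() = %d, want 3", got)
	}
}
```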
@prologic Don't let yourself get beaten down, man.
@mckinley Yeah thanks man I appreciate it! 🤗 But right now I feel like my code sucks and I've been beaten 😢
@prologic It's been a while, but your code I've seen so far didn't look too bad to me. I remember we had discussions about missing tests, but other than that, I can't recall any "oh dear, WTF" momements. Obviously, there's always something, that can be improved, nobody writes perfect code. Only close to perfect. :-) Being sick and having time pressure doesn't help writing good code either. So, don't take the harsh feedback *too* seriously. Let a week pass and have a look again, your perspective might have shifted and you possibly understand what they wanted to tell you, assuming they wanted to give you honest feedback. Buck up! :-)
@lyse Thanks bud 🤗
@lyse I found this article last night as I was trying to get to sleep (still not feeling 100%, sinuses still playing up 😢):

Best Practices for Testing in Go - FOSSA

Some good tips in here by Jessica Black that I _think_ I mostly agree with.

What do you think? 🤔 Got anything else to share along these lines? 🙏
@prologic Yup, looks good, I also agree with her.

Just a few weeks back I had basically the same idea with inventing a more generic mock implementation for our storage layer at work. Previously, we had tons of new test storage types each implementing another hardcoded behavior based on the exact input. For a start that works well and is incredibly easy, but over time it quickly becomes unmaintainable and also reading the tests is extremely hard. Why is that weird value used as an argument over here? Quite some time later one realizes: Oh, right, it will then trigger that and that.

So my approach basically boils down to the exact same thing Jessica does: being able to set a mock function in the mocked object that will then do whatever is needed. Setting up more involved tests is now concise and readable. It's still not perfect, but a large improvement even so. My implementation goes a bit further than Jessica's and falls back to the real functionality if not overridden explicitly. This has the advantage that you can just throw together a bunch of tests without mocking *everything*, since there are often a lot of steps needed to build the actual scenario.
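
A minimal sketch of that mock-with-fallback pattern (hypothetical names, not the actual work code):

```go
package storage

import "errors"

var ErrNotFound = errors.New("user not found")

// MemoryStorage is the real, map-backed implementation.
type MemoryStorage struct{ Users map[string]string }

func (m *MemoryStorage) GetUser(id string) (string, error) {
	if name, ok := m.Users[id]; ok {
		return name, nil
	}
	return "", ErrNotFound
}

// MockStorage lets a test override just the calls it cares about;
// everything else falls through to the embedded real implementation.
type MockStorage struct {
	MemoryStorage
	GetUserFn func(id string) (string, error)
}

func (m *MockStorage) GetUser(id string) (string, error) {
	if m.GetUserFn != nil {
		return m.GetUserFn(id)
	}
	return m.MemoryStorage.GetUser(id)
}
```

A test that needs a failure just sets GetUserFn to return an error; tests that only need plain data rely on the fallback.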

In Kraftwerk v2 I extended the mock storage so it can be initialized even more easily with this automatic init(). At work, where this mock.Storage type *inherits* (and not just contains) a memory.Storage, we're forced to explicitly create a memory storage for each and every mock.Storage{Storage: memory.NewStorage(…), …}. One day, if I have some time, I'll refactor the day-job code and apply this simplification, too. Ideally, Go would allow me to write some constructor thingy where I could set up and propagate initial data to the backing memory implementation. Then there's no chance of forgetting a call to s.init() in a new function. But that's the best I've come up with so far. I just want to make it as easy as possible to write tests.

So that was very cool for me to see her writing it down as well. It seems my idea the other day was not completely silly. :-) Haven't seen it anywhere else up until now.

This test subject fits perfectly. Just before quitting time two work mates and I discussed tests. And one rule we made up now is to prefer table tests when possible. This helps writing and maintaining better tests. I remember back in the Java days when there were different parameterized test frameworks, as they were called. They worked similarly, but they don't really compare to the flexibility of Go's built-in table tests. Arguably, it's still heaps of code in Go, but creating parameterized tests in Java was always much more hassle in my opinion. Oh, I need this special runner now, which is the correct one? What was the annotation called again? Oh hang on, now these other tests won't work anymore with this new test runner, I have to move stuff to new test classes. That's why I only rarely used them. With Go, it's a real first-class citizen and not an afterthought, and that positively shows. (Not sure if parameterized tests improved after Java 8.)
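
For reference, the kind of table test meant here (a generic sketch, not our actual code) -- each new scenario is just another row:

```go
package parse

import (
	"strconv"
	"testing"
)

func TestAtoi(t *testing.T) {
	tests := []struct {
		name    string
		input   string
		want    int
		wantErr bool
	}{
		{"simple", "42", 42, false},
		{"negative", "-7", -7, false},
		{"garbage", "abc", 0, true},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			got, err := strconv.Atoi(tt.input)
			if (err != nil) != tt.wantErr {
				t.Fatalf("Atoi(%q) error = %v, wantErr %v", tt.input, err, tt.wantErr)
			}
			if !tt.wantErr && got != tt.want {
				t.Errorf("Atoi(%q) = %d, want %d", tt.input, got, tt.want)
			}
		})
	}
}
```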

One thing that the article doesn't mention, or I already forgot after writing this wall of text ;-): thinking about edge cases. That's super important and often they're missed in my experience. Here TDD might be a good approach to the problem. Come up with possible corner cases up front, write some tests and then implement the logic. At least for bug fixes this is a great way. There are limitations of course: if you don't know in advance how you're going to design the API, TDD won't work in practice. We had exactly this situation this week at work, even with only one fairly simple new function in the end. We threw away four (!) designs and did it quite differently for the final result. If we had strictly followed TDD here, we would have rewritten all our tests a couple of times. And that would have been super annoying and thus demotivating (well, we had to completely rework them once). Granted, that doesn't happen thiiiis often, but it still occurs every now and then.

One last final thing: I very much enjoy looking at code coverage reports and seeing a lot of green there. This motivates me to write more tests and to think of ways I could test that last little thing here as well. And if that turns out to be impossible with reasonable effort, you know that you probably need to refactor things.
I forgot one thing. Testing for errors is also an important part that is often overlooked in my experience. Another rule we talked about two hours ago is to test error messages, preferably exactly. Then there's at least a dim chance of spotting a garbage error message. When seen in total, one might tell whether it is understandable and all the important context information is present. Lots of error messages I've come across are completely useless; I've no idea what's going on or what I have done incorrectly. A frightening lot of the time messages don't even make any sense at all. Not a single bit. Just random words put together. The really bad ones you don't understand even if you look at the code and know exactly what the situation is, but still cannot decipher the message with all that knowledge on top. It happens more often than I would think.
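
As a throwaway illustration (not the work code), asserting the full message forces you to read it once the way a caller would:

```go
package config

import (
	"fmt"
	"testing"
)

// loadConfig stands in for some real function that can fail.
func loadConfig(path string) error {
	return fmt.Errorf("load config %q: file does not exist", path)
}

// Comparing the exact message is where garbage wording gets caught.
func TestLoadConfigError(t *testing.T) {
	err := loadConfig("/etc/app.yaml")
	want := `load config "/etc/app.yaml": file does not exist`
	if err == nil || err.Error() != want {
		t.Errorf("error = %v, want %q", err, want)
	}
}
```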

No doubt, writing good error messages is an art in itself and often takes a minute or two (or even more) to come up with something short and still precise. But in the end it will always pay off to provide some quality message. Same with logging in general, of course. But errors returned to somebody else are more important than internal logs.

In a previous commercial software project the customer wanted to have a complete catalog of all log messages at info level and above. An additional description with more context had to be provided, explaining what that log ID and message meant. I think for warning level and above, both a solution (how to fix it) and a verification (how to confirm the fix actually worked) were required. Error and fatal included even more stuff I can't remember anymore.

For us developers that was incredibly annoying, but when we then finally also had to operate that software, it was absolutely awesome to have! Man, did I suddenly understand what all this effort was for. It immediately paid off. There was one guy in-house just analyzing logs from our different systems all day long and trying to categorize and correlate things. Even with the log message catalog he often had some detailed questions for us developers. Can't imagine what would have happened without that catalog.

That experience was truly an eye-opener for me. I can also see it with my current work mates. Only once you've been forced to analyze, with nothing but the logs, what was going on or went wrong, will you appreciate and also write good messages yourself. If you haven't been in that situation before, there's basically no way you'll be in a position to write decent logs. And even then you realize that important context is missing when you have to analyze something. :-)

I'm on the fence with testing log entries. In a previous project we quite often did. But there were also hard requirements to produce certain logs, so then it made sense. Usually I don't unless there are some weird circumstances. Can't think of any such situation off the top of my head right now, though.
@lyse Yeah completely agree on good error handling and messages. I think I've personally done a poor job on this with yarnd for example and probably ended up with logging and errors that were a bit too verbose and maybe too wrapped and redundant.
Speaking of good error handling: have you or your mates/colleagues thought much about good/best practices around this? Besides the fact that it's a bit of an "art form" -- so is good unit testing, really, and even designing good interfaces.

For example, how much context to provide? Should you always wrap the underlying error? Is it always useful to bubble errors up the stack? I'm not even sure myself, but one thing that does come to mind is to avoid repeating the same error as it bubbles up the stack. I don't know how to define this clearly (yet) in a set of examples and best practices like Jessica Black has done so eloquently in her article.
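
One rough rule of thumb that might become such an example (just a sketch of the idea, not a settled best practice): wrap where you know something the callee doesn't, and pass the error through unchanged where you have nothing to add.

```go
package feed

import (
	"fmt"
	"os"
)

// readFeed knows what the file is *for*, so wrapping here adds real context exactly once.
func readFeed(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read feed %q: %w", path, err)
	}
	return data, nil
}

// refresh has nothing new to say about the failure, so it bubbles the error
// up as-is instead of piling on another "read feed ..." prefix.
func refresh(path string) error {
	if _, err := readFeed(path); err != nil {
		return err
	}
	return nil
}
```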
@prologic Error handling especially in Go is very tricky I think. Even though the idea is simple, it's fairly hard to actually implement and use in a meaningful way in my opinion. All this error wrapping or the lack of it and checking whether some specific error occurred is a mess. errors.As(…) just doesn't feel natural. errors.Is(…) only just. I mainly avoided it. Yesterday evening I actually researched a bit about that and found this article on errors with Go 1.13. It shed a little bit of light, but I still have a long way to go, I reckon.

We tried several things but haven't found the holy grail. Currently, we have a mix of different styles, but nothing feels really right. And having plenty of different approaches also doesn't help, that's right. I agree, error messages often end up getting wrapped way too much with useless information. We haven't found a solution yet. We just noticed that it kind of depends on the exact circumstances, sometimes the caller should add more information, sometimes it's better if the callee already includes what it was supposed to do.

To experiment and get a feel for yesterday's research results, I tried my hand at the combined log parser and how to signal three different errors. I'm not happy with it. Any feedback is highly appreciated. The idea is to let the caller check (not implemented yet) whether a specific error occurred. That means I have to define some dedicated errors upfront (ErrInvalidFormat, ErrInvalidStatusCode, ErrInvalidSentBytes) that can be used in the err == ErrInvalidFormat or, probably more correctly, errors.Is(err, ErrInvalidFormat) check at the caller.

All three errors define separate error categories and are created using errors.New(…). But for the invalid status code and invalid sent bytes cases I want to include more detail, namely the actual invalid number. Since these errors are already predefined, I cannot add this dynamic information to them. So I would need to wrap them à la fmt.Errorf("invalid sent bytes '%s': %w", sentBytes, ErrInvalidSentBytes). That way ErrInvalidSentBytes is wrapped and can be asserted later on using errors.Is(err, ErrInvalidSentBytes), but the big problem is that the message is repeated (the %w appends the sentinel's own "invalid sent bytes" text to a message that already says it). I don't want that!

Having a Python and Java background, exception hierarchies are a well understood concept I'm trying to use here. While typing this long message it occurs to me that this is probably the issue. Anyways, I thought I'd just create a ParseError type that can hold a custom message and some causing error (one of the three ErrInvalid* above). The custom message is then returned by `Error()` and the wrapped cause will be matched in `Is(…)`. I then just return a ParseError{fmt.Sprintf("invalid sent bytes '%s'", sentBytes), ErrInvalidSentBytes}, but that looks super weird.
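
Spelled out, that ParseError idea might look roughly like this (my guess at the shape from the description above, not the actual parser code):

```go
package combinedlog

import (
	"errors"
	"fmt"
)

var (
	ErrInvalidFormat     = errors.New("invalid format")
	ErrInvalidStatusCode = errors.New("invalid status code")
	ErrInvalidSentBytes  = errors.New("invalid sent bytes")
)

// ParseError carries a detailed message but still matches one of the
// predefined categories in errors.Is checks.
type ParseError struct {
	msg   string
	cause error
}

func (e ParseError) Error() string        { return e.msg }
func (e ParseError) Is(target error) bool { return errors.Is(e.cause, target) }

func newSentBytesError(sentBytes string) error {
	return ParseError{fmt.Sprintf("invalid sent bytes '%s'", sentBytes), ErrInvalidSentBytes}
}
```

The caller can then do errors.Is(err, ErrInvalidSentBytes) and still get the detailed message without the category text showing up twice.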

I probably need to scrap the "parent error" ParseError and make all three "suberrors" three dedicated error types implementing Error() string methods where I create a useful error message. Then the caller probably could just errors.Is(err, InvalidSentBytesError{}). But creating an instance of the InvalidSentBytesError type only to check for such an error category just feels wrong to me. However, it might be the way to do this. I don't know. To be tried. Opinions, anyone? Implementing a whole new type is some effort that I want to avoid.

Alternatively just one ParseError containing an error kind enumeration for InvalidFormat and friends could be used. Also seen that pattern before. But that would then require the much more verbose var parseError ParseError; if errors.As(err, &parseError) && parseError.Kind == InvalidSentBytes { … } or something like that. Far from elegant in my eyes.
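
And the error-kind variant, for comparison (again only a sketch), really is about as verbose at the call site as described:

```go
package combinedlog

// ErrorKind enumerates the parse error categories.
type ErrorKind int

const (
	InvalidFormat ErrorKind = iota
	InvalidStatusCode
	InvalidSentBytes
)

// ParseError pairs a kind with a detailed message.
type ParseError struct {
	Kind ErrorKind
	Msg  string
}

func (e ParseError) Error() string { return e.Msg }

// Caller side:
//	var parseErr ParseError
//	if errors.As(err, &parseErr) && parseErr.Kind == InvalidSentBytes {
//		// react to the bad sent-bytes value
//	}
```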
@prologic In one project a bunch of work mates strongly advocated an idea that was new to everyone (them included). Any incoming request must produce exactly one log line. Not more and not less. Exactly one. That way the log does not get spammed with lots of useless information most of the time and one immediately sees what went wrong, if anything. In the beginning I thought this was completely ridiculous, because I had never seen it anywhere and thus just couldn't imagine that it would work at all.

The technical details to only produce one log line per request were sorted out fairly quickly with a custom logger that just replaces the last message with the newly logged one and finally, at response end, actually logs it. When a Java component was completely rewritten in Go they tried it out, and I was very surprised that it worked that well for the analysis. I basically never missed any of the surrounding logs that would have been produced in the old log-flooding style. Over time a few things such as structured context fields were added that turned out to be useful to have for error analysis. It's been a couple of years, but I think we rewrote that logger a bunch of times to optimize even further and try out new API ideas we had.
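
A stripped-down sketch of such a per-request logger (purely illustrative, not the rewritten component):

```go
package reqlog

import "log"

// RequestLogger keeps only the most recent message and emits a single
// line when the request finishes.
type RequestLogger struct {
	last   string
	fields map[string]string
}

func New() *RequestLogger {
	return &RequestLogger{fields: map[string]string{}}
}

// Log replaces whatever was logged before during this request.
func (l *RequestLogger) Log(msg string) { l.last = msg }

// Field attaches structured context that survives until the end.
func (l *RequestLogger) Field(key, value string) { l.fields[key] = value }

// Flush is called exactly once, after the response has been written.
func (l *RequestLogger) Flush(status int) {
	log.Printf("status=%d msg=%q fields=%v", status, l.last, l.fields)
}
```

Middleware would create one per request, handlers call Log/Field freely, and Flush runs in a defer once the response is done.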

I remember it as a surprisingly successful experiment. In my current project I also once tried to tell my work mates about that, but – just like me when I heard about it in the first month – they weren't ready for it. :-) To be fair, we have a slightly different situation now than in the other project.
We've barreled past the microblog line and flown straight over the e-mail chain line. This is just social blogging.
@mckinley @lyse Hahahah 😂 Honestly it doesn't matter too much 🤣 I always enjoy reading @lyse's thoughts like this 👌
@lyse I think you probably need to drop the notion of error sub-types, as Go doesn't have inheritance.
@lyse I do like the idea of only logging one log line per incoming request, especially for web services or APIs 👌
@lyse I _think_ the most interesting thing about errors.New() is just how stupidly simple it really is:

```go
// New returns an error that formats as the given text.
// Each call to New returns a distinct error value even if the text is identical.
func New(text string) error {
	return &errorString{text}
}

// errorString is a trivial implementation of error.
type errorString struct {
	s string
}

func (e *errorString) Error() string {
	return e.s
}
```

That's it! 😂
It makes me think that _really_ we should just be defining our own error types all the time 🤔 Maybe...
@prologic Right, it's not inheritance, but embedding. The two standard errors are cool. But always doing basically the same thing for all our own errors, probably also implementing Unwrap(), Is(…) and As(…), is sooooooo much work. Unnecessary work; there must be a better way. Sleeping on this twice, the main issue is probably that I didn't think carefully enough about the errors in my APIs. Which kinds of errors should be distinguishable by the caller? Does it even make sense to differentiate between them? Can the caller react differently depending on what went wrong? This also depends on the caller, of course. In my combinedlog.parseLine(…) example it's basically stupid. One generic error is enough.

Logging only a single line is often very useful. But apart from access logs in web servers I can't remember seeing this implemented anywhere in the wild.
@lyse I think that's spot on. Deliberate and careful design of errors is probably just as important as good interfaces 👌
@lyse I think that's spot on. Deliberate and careful design of errors is probably just as important as good interfaces 👌
@prologic Exactly, errors are part of the interfaces. The only problem is that I cannot formally express it in detail so that the Go compiler would give me any hints or step on my foot. Java's checked exceptions are a mess, too. So, no idea how to solve that in an ideal world.
With respect to logging.. oh man.. it really depends on the environment you are working in.. development? Log everything! And use Jaeger/OpenTracing for the super gnarly places, so you can see what's going on while building. But for production? Metrics are king. I don't want to sift through thousands of lines but have a measure that can tell me the health of the service.
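
For the metrics side, a bare-bones sketch with Prometheus' client_golang (assuming that's the stack; the names here are made up):

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// One counter per outcome tells you the health of the service at a glance,
// without reading a single log line.
var requestsTotal = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "myservice_requests_total",
		Help: "Requests processed, labelled by result.",
	},
	[]string{"result"},
)

func handle(w http.ResponseWriter, r *http.Request) {
	// ... do the actual work, then record the outcome ...
	requestsTotal.WithLabelValues("ok").Inc()
	w.WriteHeader(http.StatusNoContent)
}

func main() {
	http.HandleFunc("/do", handle)
	http.Handle("/metrics", promhttp.Handler()) // scraped by Prometheus
	http.ListenAndServe(":8080", nil)
}
```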
@xuu +1 on metric driven development (MDD?) Very important to have in a production system, service, whatever (codebase). I'm not going to look at your logs and try to decipher them, I want to see wtf happened at a specific point, then go hunt down logs around that specific time interval.
@xuu +1 on metric driven development (MDD?) Very important to have in a production system, service, whatever (codebase). I'm not going to look at your logs and try to decipher them, I want to see wtf happened at a specific point, then go hunt down logs around that specific time interval.
@xuu Yup, metrics are a different story. Need to take a deeper look at them some day. With logs I meant analyzing why requests could not be processed. It's not necessarily the actual service that has a problem.
wow 😳 this has to be one of our longest yarns in a while 😳 @lyse you might be interested in my observe package
@prologic No doubt about that. Typing them up took me pretty much exactly an hour each. ;-) Thanks, bookmarked!
@lyse Speaking of which... I'm curious how you would have implemented this little demo:

GPT3 Demo

Source: https://git.mills.io/prologic/gpt

You can tell I just "whacked" it together pretty quickly -- mostly imperative, procedural style.