
We at teamaton and discoverize used PivotalTracker from 2012 to 2024 – so for almost the entire time since the founding of our company. Since last year it has been clear that PivotalTracker will reach its end-of-life in 2025: https://www.pivotaltracker.com/blog/2024-09-18-end-of-life

In 2025 we are already using ClickUp as our planning tool. There were a few reasons for the change:

  • using the same tool as the other people in our company (not just developers) – therefore more transparency and collaboration
  • more flexibility with different lists and views (focus on current sprint, having milestones in the same tool)
  • subtasks can have the same properties as main tasks (description, attachments, …)
  • better text formatting

We still used PivotalTracker in 2024 to search for older user stories with valuable content (for instance bug fixes when encountering the same bug again). Therefore we now needed a replacement in which we can still search our vast vault of old user-story and bug wisdom, even though these searches are becoming less frequent.

I first thought about implementing this kind of search myself (importing the Pivotal exports into a new database). This seemed funky (we do funky days once a month), but would have also taken quite some time, and would have had a few drawbacks:

  • crude display of user stories without attachments
  • no editing or deleting of user stories

I thought about importing our Pivotal data into one of the big tools (ClickUp, JIRA, TargetProcess, Basecamp), but that also has disadvantages:

  • difficult import of Pivotal data (possibly without attachments)
  • too much work to set up (more bells and whistles)
  • no good search function over the data
  • pricey

Then I came across a few PivotalTracker replacement tools in the making. A good list of alternatives has been compiled here: https://bye-tracker.net/. Advantages of these tools:

  • same look and feel as PivotalTracker
  • simple setup and usability
  • some even offer a self-hosting option (see cm42-central)

I checked out all the tools on the list. Most are still in development, some in beta. They are trying to attract customers who are switching from Pivotal. Most still lack features, for instance attachments.

Finally I decided on BARD Tracker – it has everything we need:

The import from PivotalTracker is seamless (via API token): with attachments, comments, formatting, tasks, milestones, …
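For anyone scripting their own export instead, the same data is reachable through the Pivotal Tracker v5 REST API (for as long as the service is still up). A minimal Python sketch – the project id and token below are placeholders for your own values:

```python
# Minimal sketch: read stories from the Pivotal Tracker v5 REST API.
# The project id and token are placeholders for your own values.
import json
import urllib.request

def build_request(project_id: int, token: str) -> urllib.request.Request:
    """Authenticated request for the stories of one project."""
    url = (f"https://www.pivotaltracker.com/services/v5"
           f"/projects/{project_id}/stories?limit=100")
    return urllib.request.Request(url, headers={"X-TrackerToken": token})

def summarize(stories: list) -> list:
    """One line per story: id, type, name."""
    return [f'{s["id"]} [{s["story_type"]}] {s["name"]}' for s in stories]

# Usage (live call, needs a valid token):
#   with urllib.request.urlopen(build_request(1234567, "your-token")) as resp:
#       print("\n".join(summarize(json.load(resp))))
```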

[screenshot]


The pricing right now is fair, even free for solo accounts (similar to Pivotal):

[screenshot]


The look and feel is very similar to PivotalTracker:

[screenshot]

All in all it took a while to sift through the alternatives, but in the end I have a good feeling about choosing BARD, even if for now it is only used for searching through old user stories.


(This is a blog post in progress. It will change over time. For now it is just an amalgamation of my current thoughts and experiences. I have not yet bothered to read dedicated literature regarding backups. I probably forgot some things I already know. You will probably find a better write-up on backups somewhere else. If you do, please send me your source :). I mean it.)


Why backups?

There are a myriad of reasons why you should back up your data:

  • disk failure
    • disk sectors not readable anymore
    • whole disk not readable due to driver error or similar
    • external physical force destroyed disk
  • accidental erasure
  • accidental change of data
  • loss of device with data
    • fire or other external hazards
    • theft
    • accidentally leaving device somewhere unrecoverable
  • loss of access to a service where your data resides
    • service locks you out of your account because of some dispute, or the service shuts down for good
    • no access to password, email account, etc. to access service

It all depends on your personal preferences and how tragic a loss of data would be for you. This might be memories (photos, diaries), time investment (documents, programming code, access to password manager), …


What to back up?

In essence the answer is: everything you want to keep and cannot already restore via some method from somewhere else. Here are a few pointers:

  • documents
  • programming code
  • databases
  • hosted web pages
  • invoices
  • emails
  • calendars
  • bookmarks
  • software / games downloads (if bought)
  • software configurations (it can take quite some time to re-configure software so it behaves like with an old installation)
  • pictures
  • videos
  • audio files
  • data to access your password manager
  • license keys (for software)


When to back up?

As often as possible, but rarely enough that the process does not hinder your "normal" activities. It is a tradeoff between how much data you are willing to lose and how much time, money, and energy you want to invest in backups.


Where to back up to?

It depends on your personal taste for safety and data access. You can back up your data to external storage (for instance an external SSD) – external meaning outside your device(s). You could keep that external storage at home, but there it could fall victim to a fire or to theft along with your devices. So keeping the backup storage in a separate location from the devices is usually good practice, for instance at a friend's house.

You can also back up your data to the cloud or to a personal server. Be sure to check your access to the external service / server regularly.


How to back up and restore?

This depends on what you want to achieve. Here are a few scenarios:

Disk failure:

  • restore last backup image of old disk on new disk
  • restore data from last backup on new disk (possibly install operating system and software anew)

Accidentally deleted / overwritten data since last backup:

  • restore single files from last backup
  • revert files to a previous state (if using repository-like backup)

Accidentally deleted / overwritten data before last backup:

  • restore files from a backup older than the accident
  • restore files from incremental backups from the time before accident happened
  • revert files to a state before the accident (if using repository-like backup)

Loss of access to service with data:

  • backup data via an export option in the service (choose the appropriate contents and format)
  • find a service (or local software) where you can import the backup you made from the service to which you lost access

Loss of device with data:

  • get new / used device, install operating system and software anew, restore data from last backup

Backup Tools:

Automate as much as you can – otherwise you will be more prone to procrastinating your backups. There are many backup tools out there.

  • disk image: Clonezilla
  • data backup from one disk to another (external) disk: SyncBack
  • upload backups to AWS: JungleDisk (now CyberFortress)
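To illustrate the automation point, the core of a mirror backup fits in a few lines of Python – a bare-bones sketch with placeholder paths; dedicated tools like SyncBack handle deletions, retries, and logging far more robustly:

```python
# Bare-bones sketch of a mirror backup: copy files that are new or newer
# than their counterpart on the backup disk. Paths are placeholders.
import shutil
from pathlib import Path

def backup(source: Path, target: Path) -> list:
    """Copy new/changed files from source into target; return what was copied."""
    copied = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = target / src.relative_to(source)
        if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            copied.append(dst)
    return copied

# Usage: backup(Path.home() / "documents", Path("/mnt/external-ssd/documents"))
```

Scheduling a script like this (Task Scheduler, cron) is what turns it from a chore into a habit.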

Do not forget to back up your backup configurations if you want to re-set up those services faster next time.

Restoring:

Check intermittently that you can restore the data you backed up. It can be very frustrating if you rely on a backup and it is not restorable or does not contain the data you expect.
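One simple way to run such a spot check is to compare checksums between the originals and the backup copies. A minimal sketch, assuming both sides are plain directory trees:

```python
# Sketch: spot-check a backup by comparing SHA-256 hashes against the originals.
# Assumes source and backup are plain directory trees.
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_mismatches(source: Path, backup: Path) -> list:
    """Return relative paths of files that are missing or differ in the backup."""
    mismatches = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        dst = backup / src.relative_to(source)
        if not dst.exists() or file_hash(src) != file_hash(dst):
            mismatches.append(src.relative_to(source))
    return mismatches
```

An empty result means every original has an identical copy in the backup; anything listed is a file you could not restore as expected.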


This year I took part in BASTA for the first time (together with my colleagues). Remotely, because I did not want to travel to Mainz for it. And only for two days: one session day (Tuesday) and one workshop day (Friday). Here are a few notes on Tuesday…

Keynote: Back to the Future: AI-driven Software Development

Presentation by Jörg Neumann and Neno Loje (link to the video on entwickler.de)

I liked the keynote surprisingly well. Probably because I have not yet used artificial intelligence in my work, the topic is exciting, a lot is changing, and I will certainly be using it in the future.

In general: code generation via ChatGPT, training AI on your own data: https://meetcody.ai/

Integrating AI into development: Cody, GitHub Copilot (plugins for VS Code, Visual Studio, JetBrains IDEs)

  • have it write code (based on comments)
  • have it analyze code
  • have it adapt / improve code
  • have it generate unit tests


Azure: set up your own ChatGPT
theforgeai.com: combine several AIs and define workflows (different roles – like in an agile team)

Conclusion: I think AI can help us: we need to identify scenarios where it can support us and then try it out there. Data and code security must of course be guaranteed, as must the know-how to analyze and improve what the AI adapts or creates.
We are (too) few people in our company: AI can take work off our hands, so we can concentrate on other important things.

Sessions

1. Designing fault-tolerant backends

Presentation by Patrick Schnell

Definition of fault tolerance

Event-Driven Design

[screenshot]

Modules that communicate via HTTP do not constitute EDD

Event hub / message queue

Examples: Redis (simple), RabbitMQ, …

Stateful vs. stateless: pros and cons

Conclusion: This only scratched the surface. Not much that I could take away or that we could use. But it was good to revisit the concept.


2. Simple Ways to Make Webhook Security Better

Präsentation von Scott McAllister

webhooks.fyi: open source resource for webhook security

Webhook Provider – Webhook Message

why webhooks:

  • simple protocol: http
  • simple payload: JSON, XML
  • tech stack agnostic
  • share state between systems
  • easy to test and mock

security issues: (listener does not know when a message will come through)

  • interception
  • impersonation
  • modification / manipulation
  • replay attacks

security measures:

  • One Time Verification (example: Smartsheet: verifies that subscriber has control over their domain)
  • Shared Secret
  • Hash-based Message Authentication (HMAC)
  • Asymmetric Keys
  • Mutual TLS Authentication
  • Dataless Notifications (listener only gets IDs, then has to make an authenticated API call)
  • Replay Prevention (concatenate timestamp to payload with timestamp validation)

TODOs for Providers: https, document all the things (show payload example, demonstrate verification, show end-to-end process)

TODOs for Listeners: check for hash, signature
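To make the HMAC measure from the list above concrete, here is a minimal Python sketch of GitHub-style signing and verification (the shared secret is a placeholder that provider and listener would agree on):

```python
# Sketch: HMAC-signed webhook verification, GitHub-style.
# The shared secret is a placeholder agreed on by provider and listener.
import hashlib
import hmac

def sign(payload: bytes, secret: bytes) -> str:
    """Signature the provider attaches, e.g. in an X-Hub-Signature-256 header."""
    return "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, received_signature: str, secret: bytes) -> bool:
    """compare_digest runs in constant time, guarding against timing attacks."""
    return hmac.compare_digest(sign(payload, secret), received_signature)
```

A listener recomputes the signature over the raw request body and rejects the message when verify returns False; signing a timestamp along with the payload (and rejecting old timestamps) extends this scheme to replay prevention.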

Conclusion: A good overview of the potential vulnerabilities, well illustrated using a GitHub webhook. Since we potentially provide webhooks ourselves (currently via "dataless notifications"), this is good to keep in mind.


3. Leveraging Generative AI, Chatbots and Personalization to Improve Your Digital Experience

Presentation by Petar Grigorov

What is the best way for humanity to live / survive on earth: empathy, kindness, sharing stories

AI in CMS

AI in Personalization

Conclusion: A somewhat long-winded talk that struggled to get to the point.


4. Agile recruitment and disappointed expectations

Presentation by Candide Bekono

Agile job interviews: have several people on the hiring team (assessments from different perspectives), authenticity

Gather feedback from candidates and the hiring team: after interviews, ask why a candidate does not want to join the company

Candidate-centered approach: adapt the recruiting experience to candidates' needs, preferences, and expectations

Pipeline management: maintain relationships with potential / former candidates (time-consuming)

Selection: define the essential criteria and measure those

Why agile recruitment processes: learn faster what works, experiment (e.g. adjust the job ad), adapt the process (requirements, sourcing, …)

[screenshot]

Candidates' considerations: job fit, company culture

Employers' expectations:

  • skills, experience
  • cultural fit
  • communication, response time
  • the candidate's interest and enthusiasm
  • long-term commitment

Conclusion: Aim for shorter feedback cycles in the hiring process; try things out, evaluate, adapt (agile). The presenter also stressed repeatedly that the evaluation step must not be neglected (but often is – as with us).

5. Done or not done? Strategies for shippable increments in every sprint

Presentation by Thomas Schissler

Why "done" increments: minimize uncertainty, reduce the time spent flying blind

[screenshot]

-----

Are we building what delivers the most value?

Validate assumptions

Feedback – not only from customers (technical feedback from developers)

-----

Is our quality right?

Uncover misunderstandings early

Avoid surprises

-----

How can we improve our approach?

Indicators for improvement

Experiment, learn, improve

-----


Approach:

  • define a first vertical slice
  • define acceptance criteria
  • leave out the special cases at first

Swarming: all developers work on one project instead of on several – so that at least some projects are finished within the sprint

Side effects of swarming: daily scrum and sprint planning no longer run past each other (more interest, because everyone works on the same project)

Continuous integration: branches should not grow old – merge them into development within a sprint if possible (see also feature toggles instead of branches)

Continuous testing: do not test only after development, but in parallel with it

Passion for the product can grow through "done" increments

Definition of Done: set by the developers, adjusted again and again (for the next sprint) to avoid known problems (not shippable, things that slow us down)

Conclusion: A rousing talk (mostly thanks to the speaker) about topics that we could integrate more into our day-to-day work.


the task:

Set a recurring reminder in the meetings-channel so that every week a different person (rotating through all the people in the company) has the duty of being meeting-master – with the name of the person who is meeting master that week in the message.

the obstacle:

Setting recurring reminders in Slack via the /remind command is a pain. (I know this task can be set up with the vanilla /remind command, but it always takes me quite a while to figure out the right syntax after not having used it for a year.)

the occasion:

Every two weeks everyone at discoverize has the opportunity to spend a few hours on something "funky".

the path + the solution:

At first I thought I would write a .NET Core application that sends messages to Slack at the appropriate times with the appropriate person as meeting master. Then my thoughts wandered to background tasks (which we already use in a different service). I found Hangfire, which would persist the tasks for me and make my life easier – at the cost of depending on a third-party library.

I had already started to look into our Slack apps to create a new one (to follow this tutorial), but then realized that there are probably Slack apps out there that already do a better job of creating recurring reminders. And since my time was dwindling away, I just searched and compared apps. I landed on RemindUs, installed it into our Slack, and started using it. It seems to do the job well enough. Task accomplished.

update:

After testing RemindUs, it turned out not to suit our use case well. It did not convert a @user mention in a message into an alert to that user. Furthermore, the layout and text were those of a reminder (of course), which felt weird. We just want a message to appear repeatedly, as if it had just been entered by a bot/user.

I researched other Slack apps, but only paid ones did what I needed. And for such simple usage I do not want to pay six dollars or more per month.

So I reverted to vanilla Slack and the /remind command. Here is the usage in our use case:

/remind #_meetings "@anton - you are meeting master this week" on 08.05.2023 every five weeks

And then I set up this reminder for every team member in the meeting-master rotation, one week apart.
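For what it's worth, the rotation logic those staggered reminders encode is simple to compute yourself. A Python sketch – the team roster and start date below are hypothetical placeholders, not our actual setup:

```python
# Sketch: compute who is meeting master in a given week.
# The team roster and rotation start date are hypothetical placeholders.
from datetime import date

TEAM = ["anton", "berta", "carla", "david", "emil"]  # placeholder roster
ROTATION_START = date(2023, 5, 8)  # the Monday the rotation began

def meeting_master(today: date) -> str:
    """Rotate through TEAM, one member per calendar week."""
    weeks_elapsed = (today - ROTATION_START).days // 7
    return TEAM[weeks_elapsed % len(TEAM)]

# Usage: meeting_master(date.today())
```

With five people in the rotation, each member comes up every five weeks – exactly what the staggered "every five weeks" reminders reproduce.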