• 0 Posts
  • 7 Comments
Joined 10 months ago
Cake day: December 27th, 2023

  • you’re welcome.

    what i’d suggest… a general rule i like to always follow is to use a test system for everything new. but that does not need to be a full separate system every time.

    let’s say you have your mailbox and want to try getting new mails from it using fetchmail. first, you can use the uidl mechanism to only fetch every mail once and otherwise leave them all on the server. but i like it a bit more secure: create a second email address/account at your mail provider’s service only for testing. that way you can do whatever you like to test the mechanisms without even touching your real inbox (maybe even fill it up with large emails and see how the system reacts, i once had an email account with a cheap provider that deadlocked inboxes when full…).

    then, when everything is as you want it, switch the account and password (or create another config file for fetchmail) and you’re done. every change (not only fetchmail things) can be tested this way before going live with it.

    filtering could be done with procmail for example, but if the mda that procmail calls somehow exits with success even though the email really wasn’t delivered, then the email might be lost forever, depending on the settings of course. so fiddling with new stuff always carries the risk of not fiddling correctly ;-)
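
    just as an illustration, a minimal ~/.fetchmailrc for such a test account could look roughly like this (server name and account are made up, and the file should be chmod 600 because of the password):

        poll pop.testprovider.example protocol POP3 uidl    # uidl: fetch every mail only once
            user "testbox@testprovider.example" password "s3cr3t"
            keep                                # leave all mails on the server
            mda "/usr/bin/procmail -d %T"       # hand each mail to procmail for filtering/delivery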

    have fun!


  • it’s possible to tell your mta (like postfix) to use another mta for all mails, or only for some domains etc., so using a third party as the internet-facing service, then getting the mails with fetchmail and storing them in a dovecot server, is easy. for the sending part you could use your standard email client (i.e. thunderbird on the pc or k9-mail on the smartphone) and send through the postfix instance that also sits on the server hosting your dovecot service. the mta there takes the mail and delivers it by rules, which could simply mean relaying all outgoing email through your freemailer’s mta with the username/password of your account. i am doing this, but the “external” mail systems are my servers as well, i just don’t want emails to stay too long on VMs in the datacenter where i have no access to the physical disks in case something goes wrong.
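
    as a rough sketch (hostnames and credentials are just placeholders), the relaying part in postfix could look something like this, sending everything out through the freemailer’s submission port with your account credentials:

        # /etc/postfix/main.cf (excerpt)
        relayhost = [smtp.freemailer.example]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_tls_security_level = encrypt

        # /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
        [smtp.freemailer.example]:587 myaccount@freemailer.example:mypassword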

    a raspberry pi is sufficient for such a setup (i am using a pi4 currently, but for emails only i’d say a 3 or older would do too). adding a disk via usb makes storage huge and cheap, i use two usb ssds in a raid1 for storage… that server could be made accessible only through a vpn if you wish, depending on your skills and needs (i mainly use ssl client certificates, which are supported by k9-mail and thunderbird, so it fits seamlessly to connect through a haproxy that authenticates these before proxying the plain connection to the pi). clients like thunderbird can offline-store all emails (configure download-or-not per imap folder), making searches easy and quick, while my k9 client can search locally or on the server if needed.
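
    the haproxy part of that idea, stripped down to a sketch (addresses and cert paths are invented, not my actual config):

        # /etc/haproxy/haproxy.cfg (excerpt)
        frontend imaps_in
            bind :993 ssl crt /etc/haproxy/server.pem ca-file /etc/haproxy/client-ca.pem verify required
            mode tcp
            default_backend dovecot_pi

        backend dovecot_pi
            mode tcp
            server pi 192.168.20.10:143 check   # plain imap towards the pi, tls + client cert end at haproxy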

    maybe adjust the maximum mail size of your own mta to exactly match (or be slightly below) that of the freemailer you use, to prevent surprises with big emails that later turn out to be unsendable.
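
    in postfix that is a single setting, e.g. if the freemailer accepts at most 50 MB:

        # value is in bytes; 50 MB here, pick your freemailer's real limit
        postconf -e 'message_size_limit = 52428800'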

    it’s possible to have a nextcloud instance on that same pi acting as a web mailer just in case (i really don’t need it, but i’ve set it up anyway). nextcloud is also great for syncing/backing up files, pictures, contacts, notes, todo lists and the calendar of your phone (i use davx5, opentasks and foldersync for that). there are other webmailers available, but installing/using nextcloud is not a too bad idea either ;-)

    i also suggest setting up some automatic offsite backup with snapshots of that pi, to cover the emails as well as the setup and its configs ;-)


  • there was a study saying that there is not “the” best way of learning; it is best to combine multiple ways, like with an app, by book, listening to audio only (i listened to radio stations via the internet and got some practice for free), a bit of talking, visiting a country that only speaks that language and so on. trying everything a bit in parallel.

    that is because our brain learns better when it is given more different types of “connections” to learn with.

    i started with duolingo (website only, not the app and only the free parts) 4 years ago and now i speak quite fluently. but i also partly read a grammar book, visited a spanish-speaking country (more than once), watched movies with subtitles only in my language and did lots of phone calls in spanish only.

    my advice is:

    look at free apps, whatever pleases you, take chances, listen to the sound (movies, radio), try to speak, and read easy books or go through exercise books.

    duolingo is good for keeping on going while not really motivated, as the shortest unit that counts is really only minutes and one can choose to do something that is already easy. this way continuity is kept even if the pace is down for a while, and it is much easier to pick up the pace again when you never really stopped.


  • i am happy with a raspberry pi setup connected to a VLAN switch: the internet is behind a modem (in bridged mode) connected with ethernet to one switchport, while the raspi routes everything through one tagged physical GB switchport. the setup works fine with two raspis and failover, without tcp disconnections during an actual failover, only a few seconds of delay when that happens. so basically voip calls recover after seconds and streaming is not affected, while in a game a second off might already be too much. however, as such hardware failures happen rarely, i am running only one of them anyway.
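
    just to sketch the tagged-port idea (interface names, vlan ids and addresses are made up), on the raspi this can be as simple as vlan sub-interfaces on the single gigabit port, e.g. with ifupdown and the vlan package:

        # /etc/network/interfaces (excerpt), eth0 is the tagged switchport
        auto eth0.10                 # vlan 10: wan towards the modem
        iface eth0.10 inet dhcp

        auto eth0.20                 # vlan 20: lan
        iface eth0.20 inet static
            address 192.168.20.1/24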

    for the firewall i am using shorewall, while for some special routing i also use the unbound dns resolver (one can easily configure static results for any record) and haproxy with sni inspection for specific https routing in the rather specialized setup i have.
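
    the static-results part in unbound is only a few lines, roughly (zone and address are invented):

        # /etc/unbound/unbound.conf.d/local-overrides.conf
        server:
            local-zone: "home.example." static
            local-data: "nas.home.example. IN A 192.168.20.5"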

    my wifi is done by an openwrt device, but i only use it for having separate wifis bridged to their own vlans.

    thus this setup allows for multi-zone networks at home, like a wifi for visitors with daily changing passwords and another for chromecast or home automation, each with their own rules, hardware redundancy and special tweaking. everything that runs on gnu/linux is possible, including pihole, wireguard, ddns solutions, traffic statistics, traffic shaping/QOS, traffic dumps, or even SSL interception if you really want to import your own CA into your phone and see what data your phone’s apps (those that don’t use certificate pinning) are transferring when calling home, and much more.

    however, regarding ddns, it sometimes feels safer and more reliable to have a somehow reserved IP that does not change. some providers offer rather cheap tunnels for this purpose. i once had a free (ipv6) tunnel at hurricane electric (besides another one for IPv4), but now i use VMs in data centers.

    i do not see any ready-made product that is that flexible. however, to me the best ready-made router system seems to be openwrt: you are not bound to a hardware vendor, get security updates longer than with any commercial product, can copy your config 1:1 to a new device even if the hardware changes, and have the possibility to add packages with special features.

    “openwrt” is IMHO the most flexible ready-made solution for long-term use. “pfsense” is also very worth looking at and has some similarities to openwrt while being different.


  • i have to admit that my point ‘just don’t do it’ does not, in reality, guarantee that you avoid any trouble. it is still possible to be sued for things someone else did.

    also one suggestion to think about:

    if the seller just sprays some random changes over the book for every sold copy, “every” sold copy would differ from every other sold copy. by blindly changing those differing parts to something else you could reveal exactly which two/three copies you had for diffing.

    UPDATE: someone else here had the same thought a bit earlier…

    my suggestion to not do it stays the same ;-)

    it could be interesting to figure out how these things work and what could be done to prevent or circumvent such prevention, but actually doing it seems risky no matter what.


  • have a look at “snowdrop” (search for it together with “steganography”), it’s basically the opposite of what you want, but worth mentioning here. watermarks could be placed into whitespace (not limited to actual spaces or linebreaks: intentionally changed usage of paragraphs, tabs or even page boundaries) and could possibly be detected after scanning and even after OCR. IMHO snowdrop uses - depending on the chosen operation mode - small errors like misspelled words, commas etc., but it also has a mode that comes along with fine grammar and without misspelled words…

    how do you make sure that by diff’ing two versions you cover “everything” that has been deliberately placed into both documents but happens to be literally the same information in both?

    let’s say you bought two copies of a book at two different stores with two different watermarks. if the watermark contains the date and time of the purchase and the only difference were the minutes, because you bought them within the same hour, the remaining (identical) part of the watermark would point to all buyers who bought exactly this book in this hour - worldwide. but it could still be “very” precise, depending on all other(!) buyers, if they exist at all within that timeframe. what if the watermark includes the unix epoch? then the part which is the same in both watermarks would not be bound by hours, but by seconds, 10 seconds, 100 seconds etc.

    and you could not know if there were other hidden watermarks that just happened to be the same for your two (three?) purchases (same country, continent, payment method, credit card holder name, name of the internet provider used during purchase, browser used etc.). it fully depends on the creator of the watermark what is included and what is not. if you happen to know all of that (without any possible exceptions) you might be on the safe side, but if not…

    my general suggestion here is:

    • if you want to be sure not to get into trouble, then just don’t do it.
    • if that book is too expensive compared to its content, just not buying it possibly also helps the market fix the problem.
    • save that time and instead help those who already fight for a better world.
    • search for books that are already licence-free (or “cc”-licensed) and promote those instead; help improve free resources like openstreetmap and wiki*, but do not publish licence-poisoned content there, write it yourself, always.
    • write your own book and publish it free.

    just to mention… the “safe” side sometimes seems limited but maybe is actually not, if you really look at it.


  • my 2 cents just in case…:

    a raid6 is not a replacement for a backup ;-) i use rdiff-backup, which is easy to use, stores only one full backup plus increments into the past, and only lets you delete the oldest increments (afaik no “merging”). i never needed anything else. one backup should be off-site and another one offline, to be synced manually once in a while. make complete dumps (including triggers, etc.) of your databases before doing the backup ;-)
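
    as a sketch of how a nightly job for that can look (host names, paths and retention are just examples; depending on the rdiff-backup version the command syntax differs a little):

        #!/bin/sh
        # dump the databases first, then back up, then prune the oldest increments
        mysqldump --all-databases --routines --triggers --events > /srv/dumps/all-databases.sql
        rdiff-backup /srv backuphost::/backups/myserver/srv
        rdiff-backup --remove-older-than 1Y backuphost::/backups/myserver/srv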

    i like to have a recreatable server setup: set it up manually, then put everything i did into ansible, then try to recreate a “spare” server using ansible and the backup. test everything, and you can be sure you have also “documented” your setup to a good degree.
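
    nothing fancy is needed; even a small playbook that installs the packages and drops your configs goes a long way (everything here is made up as an example):

        # site.yml - minimal sketch
        - hosts: mailserver
          become: true
          tasks:
            - name: install the mail stack
              apt:
                name: [postfix, dovecot-imapd, fetchmail]
                state: present
            - name: deploy postfix main.cf
              copy:
                src: files/main.cf
                dest: /etc/postfix/main.cf
              notify: restart postfix
          handlers:
            - name: restart postfix
              service:
                name: postfix
                state: restarted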

    for the hardware i do not have many assumptions about performance (until it hits me), but an always-running in-house server should better save power (i learned this the costly way). it is possible to turn cpus off and run on only one cpu at a reduced frequency in times without performance needs. that could help a bit, or at least it would feel good to do so, while turning cpus back on and setting a higher frequency again is quick and can easily be scripted.
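
    a sketch of how that can be scripted (which cores exist and which governors are available depends on your machine; needs root):

        #!/bin/sh
        # low-power mode: take all cores except cpu0/cpu1 offline and use the powersave governor
        for c in /sys/devices/system/cpu/cpu[2-9]*/online; do echo 0 > "$c"; done
        echo powersave > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

        # back to full power
        for c in /sys/devices/system/cpu/cpu[2-9]*/online; do echo 1 > "$c"; done
        echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor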

    hard drives: make sure you buy 24/7-rated ones, they are usually way more hassle-free than the consumer grades and likely “only” cost double the price. i would always place the system on SSDs, but always as a raid1 (not raid6), while the “other” mirror member could maybe be a magnetic disk set to write-mostly.
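
    with mdadm the write-mostly flag can be set per device when creating the mirror (device names are examples):

        # ssd + hdd mirror: reads are served from the ssd, the hdd is mostly only written to
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 --write-mostly /dev/sdb1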

    as i do not buy “server” hardware for my home server, i always buy the components twice when i change something, so that i have the spare parts ready at hand when i need them. running a server for 5+ years often ends with not being able to buy the same part again, and then you have to first search for what you want, order, test, maybe send it back because it might not fit… unstable memory? mainboard sending smoke signals? with spare parts at hand, a matter of minutes! the only thing i am missing with my consumer-grade home server hardware is ecc ram :-/

    for cooling i like to use a 12cm fan and power it with only 5v (instead of the 12v it wants) so that it runs smoothly slow and nearly as silent as passive-only cooling, but heat does not build up in the summer. do not forget to clean out the dust once in a while… i never had a 5v-powered 12v 12cm fan that had any problems with its bearings, and i think one of them ran for over a decade. i think the 12 volt fans last longer on 5v, but no guarantee from me ;-)

    even with a headless server i like to have a quick way at hand to get to a console in case the network is not working. i once used a serial cable and my notebook, then a small monitor/keyboard; now i use pikvm and can look at my server’s physical console from my mobile phone (but i would need an ssl client certificate and TOTP to do so). this involves the network, i know XD

    you likely want SMART monitoring of your disks, and once in a while run memtest.
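
    e.g. (device name is an example; smartctl/smartd come with the smartmontools package):

        smartctl -H -a /dev/sda     # health verdict plus the full SMART attribute table
        # smartd can be configured in /etc/smartd.conf to mail you when attributes degrade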

    for servers i also like to have some monitoring that can somehow push a message to my phone for foreseeable conditions that i would like to handle manually.

    debsums, logcheck, logwatch and fail2ban are also worth looking at, depending on what you want.

    also, after updating packages, have a look at lsof | egrep “DEL|deleted” to see which programs only need a simple restart to really use the libraries that have been updated. that way reboots are only needed for new kernels.

    ok this is more than 2 cents, maybe 5. never mind

    hope these ideas help a bit