TL;DR
Caching and Offline Support
One of the most effective strategies for optimizing apps for slow networks is implementing caching and offline support. By caching data when on a faster network, such as Wi-Fi, apps can reduce the need to download data over slower connections like 2G [1:1]. Offline-first architecture, where all reads and writes happen against a local database with background syncing, can significantly improve user experience by reducing reliance on constant connectivity [2:5].
Optimizing Data Transfer
Reducing the amount of data transferred is crucial for performance on slow networks. This can be achieved by using efficient APIs that avoid sending unnecessary data and by compressing or downgrading media quality [1:2], [1:7]. Consider using libraries that handle image caching and provide different image sizes based on network conditions [1:2]. Additionally, employing low-bandwidth headers can help adjust response sizes dynamically [2:2].
Testing Under Low-Bandwidth Conditions
Testing your app in environments that simulate low-bandwidth conditions can help identify potential performance issues. Some developers use tools like the Network Link Conditioner to emulate poor network conditions during testing [2:3], while others set up separate networks that mimic 2G speeds [2:4]. This approach allows developers to add features such as retry mechanisms, placeholder images, and reduced data sizes to enhance performance [2:4].
Handling Failed Requests Gracefully
For apps that require network connectivity, it's important to handle failed requests gracefully. Implementing better observability to understand how prevalent connectivity issues are can help address user complaints [2:1]. Providing feedback to users when a request fails and allowing them to queue actions for later processing when connectivity improves can also enhance user satisfaction [2:6].
Considerations Beyond the Discussions
Beyond these technical strategies, consider the overall design of your app. Simplifying the user interface and minimizing the number of network-dependent features can make an app more resilient to slow networks. Additionally, educating users about the benefits of downloading content while on faster networks can help manage expectations and improve their experience with your app.
Inflation is up. Everything costs more. Fewer and fewer people can afford broadband internet and unlimited high speed data.
In my hometown, many people are on throttled "2G" speeds, like 128kbps, after they exhaust their 1-2GB high speed data. These are lifeline plans or the cheapest MVNO plans less than $10/month.
Some apps run really slow at 2G network speeds, like Airbnb, Amazon, one bank's app, Tinder.
Some apps run just as smoothly, like email, Google Maps (if offline maps are updated overnight), another bank's app.
What are the tricks for developing apps that run consistently smoothly at 2G network speeds? Why doesn't every app developer make apps compatible with low bandwidth?
Most likely: load data while on Wi-Fi and then use cached data as much as possible.
Use a library like Glide that can cache images, and make sure you're actually using those caches. Also consider using an API that offers different image sizes, and if your app's downloads are taking too long, downgrade image quality.
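For what it's worth, a rough sketch of that suggestion with Glide on Android; the two URL parameters, the 600 px cap, and the metered-network check are placeholder choices, not something the commenter specified:

import android.content.Context
import android.net.ConnectivityManager
import android.widget.ImageView
import com.bumptech.glide.Glide
import com.bumptech.glide.load.engine.DiskCacheStrategy

// Pick a smaller rendition on metered connections and let Glide keep copies
// in its disk cache so repeat views don't hit the network at all.
fun loadCover(context: Context, imageView: ImageView, thumbUrl: String, fullUrl: String) {
    val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
    val url = if (cm.isActiveNetworkMetered) thumbUrl else fullUrl
    Glide.with(imageView)
        .load(url)
        .diskCacheStrategy(DiskCacheStrategy.ALL) // cache both source and resized versions
        .override(600, 600)                       // never decode more pixels than the UI needs
        .into(imageView)
}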
There are a lot of 'little' things like this that you can do. I'd actually look into how to make efficient REST APIs; that might give you guidance on where best to spend your time making improvements.
And their API is probably pretty clean. Don't send out or request useless data. Skip media or severely reduce quality. Cache. Reduce analytics. Don't load ads.
Well let's say you need to show data from online db on server in your app.
You can
a) load all the data, send all the data, load all the data, etc. - anytime you start the app, press a button, refresh. This is a simple approach that always needs a connection, sort of like browsing the web; all the data is online. Easy and cheap to maintain. Good for food delivery apps. Can't have last week's business lunch meals showing up. Can't order offline anyway.
b) load data the first time you need it, keep a local copy, update only differences or missing parts, update only data that changed on the server when it changes, and delete only long-unused local data. Works offline too if the data was loaded at least once before. This is implemented in e.g. Firebase Realtime Database. Good for maps etc.
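For illustration, a minimal Kotlin sketch of what approach b) can look like; Api, LocalDb and the timestamp-based delta are hypothetical, the point is just that reads go to the local copy and only changes cross the network:

data class Item(val id: String, val updatedAt: Long, val payload: String)

interface Api {
    // server returns only records modified after the given timestamp
    suspend fun changedSince(timestamp: Long): List<Item>
}

interface LocalDb {
    suspend fun lastSyncTimestamp(): Long
    suspend fun upsert(items: List<Item>)
    suspend fun all(): List<Item>
}

class Repository(private val api: Api, private val db: LocalDb) {
    // reads always come from the local copy, so they work offline too
    suspend fun items(): List<Item> = db.all()

    // called opportunistically: app start, pull-to-refresh, when on Wi-Fi, ...
    suspend fun sync() {
        val delta = api.changedSince(db.lastSyncTimestamp()) // only differences are transferred
        if (delta.isNotEmpty()) db.upsert(delta)
    }
}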
Why doesn't everyone use b) in their apps?
It's not always the best option. The simplest solution that works is usually the way to go.
It's much easier to maintain an app on multiple platforms that just shows loaded data than an app that has to maintain its own local-cache logic, or that relies on third-party libraries that might mess things up in the next update.
The reason why app devs don't make more apps designed to run on obsolete hardware is because that hardware is obsolete.
If you're living in a developing nation where internet service is quite poor and expensive, that's an issue between you and your service providers. I guarantee you, as an app dev, I never think to myself, "You know, I could make an app that communicates as required on common hardware, or I can go back and design everything to work specifically for obsolete tech so people in poor countries can avoid feeling left out."
Why would I do that? The people who have the least amount of money to pay me for my services want me to retrofit my app with shit tech just so they don't feel bad about the service they get? Please.
Nobody is thinking about limiting themselves to avoid leaving a stark minority behind.
Why doesn't every app developer make apps compatible with low bandwidth?
Because writing apps to work with that kind of bandwidth is what we did when that kind of bandwidth was all we had to work with. Now we have much better bandwidth, so we've abandoned the old, slower ways because they no longer serve a purpose.
Don't blame app devs because you've got shitty internet providers.
so people in poor countries can avoid feeling left out
you mean Americans? 2G bandwidth after the first 1GB is what many Americans can barely afford.
Not every state in the union is exactly stunning the world with its business and social innovation.
Coming here asking why developers aren't supporting obsolete tech instead of addressing your angst towards shoddy service providers is not going to get you anywhere.
There was a time in very recent human history when you walked out your door in the morning and you left behind your TV, your computer and your phone and everyone was okay. You don't need to be streaming media content to your device all day every day. If you can only afford 1GB, ration your use to get the most out of it.
Don't come here asking us to do stupid things to make your life easier. If you can't afford a better data plan, you can't afford to make it worth my while to rework something I made to use obsolete tech.
You must be a great guy to hang out with.
You said it yourself. What do you think the difference is between these apps?
Also caching incorrectly will result in data inconsistency.
maybe they're sending monitoring reports for each movement?
We have a mobile app that works pretty well, but occasionally we'll get people that get frustrated with it because it's "not working".
When we look into this, it almost always ends up being because they were in a remote area with low service and the app couldn't load what it needed.
We try and tell them this, but they are adamant that it's just our app, which it kind of is, since we don't really handle low-service failures.
How do you architect your mobile app to handle failed requests in low service areas?
Offline mode via repository. All network interactions pass through a local cache on their way back. Pretty common pattern and not hard to implement from the start; the issue is that it's rarely implemented from the start, and devs tend to try to piecemeal it into certain requests, which never works.
Alternatively, consider GraphQL, which may help with over-fetching and also has some native caching behaviours built in.
There are also low-bandwidth headers that you can use to dynamically adjust the response size based on whatever content sizes make sense: this could be smaller images, smaller paginations, etc.
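One hedged way to do that on Android is to send the standard Save-Data client hint from an OkHttp interceptor when the device is on a metered connection. Whether the backend actually returns smaller images or shorter pages in response is up to the server; that part is assumed here:

import android.content.Context
import android.net.ConnectivityManager
import okhttp3.Interceptor
import okhttp3.OkHttpClient
import okhttp3.Response

// Advertise the Save-Data hint on metered connections so the server can
// choose to send smaller images, shorter pages, etc.
class SaveDataInterceptor(private val context: Context) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
        val request = if (cm.isActiveNetworkMetered)
            chain.request().newBuilder().header("Save-Data", "on").build()
        else
            chain.request()
        return chain.proceed(request)
    }
}

fun buildClient(context: Context): OkHttpClient =
    OkHttpClient.Builder().addInterceptor(SaveDataInterceptor(context)).build()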
Ultimately it really depends on the nature of your app; read-heavy and content-based apps generally have more options. Write-heavy apps with strong consistency demands have fewer options.
I would first fix the UX via a banner that says 'poor connectivity', and secondly try the different caching strategies above. ChatGPT will honestly give you great options on this.
Thanks for sharing !
Np, good luck!
Our office had a separate wifi network that emulated shitty 2G with packet drop. It was really useful to be able to test while developing. Iirc it resulted in devs adding a ton of retry logic, placeholder images, loading animations, and reduced data sizes.
That’s awesome and I wish this was standard. I often curse devs from sunny urban California when I’m in the woods with spotty coverage and the app is very confused why the internet is unreachable.
When developing for iOS, you can do this via the Network Link Conditioner preference pane and test in the simulator: https://www.avanderlee.com/debugging/network-link-conditioner-utility/
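On Android, if you can't set up a throttled test network or a link conditioner, a crude in-app stand-in is an OkHttp interceptor you only wire into debug builds; the 2-second delay and 10% failure rate below are arbitrary numbers for illustration:

import okhttp3.Interceptor
import okhttp3.OkHttpClient
import okhttp3.Response
import java.io.IOException
import kotlin.random.Random

// Delay every request and occasionally fail one, so loading states, retries
// and placeholders actually get exercised during development.
class SlowNetworkInterceptor(
    private val extraLatencyMs: Long = 2_000,
    private val failureRate: Double = 0.1,
) : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        Thread.sleep(extraLatencyMs)                        // fake 2G-ish round trips
        if (Random.nextDouble() < failureRate)
            throw IOException("Simulated network failure")  // fake packet loss / timeouts
        return chain.proceed(chain.request())
    }
}

// Only add this in debug builds, e.g. behind a BuildConfig.DEBUG check.
fun debugClient(): OkHttpClient =
    OkHttpClient.Builder().addInterceptor(SlowNetworkInterceptor()).build()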
Make it offline-first. Your UI changes based on locally stored data updates. Your repo must have functions to fetch the data remotely and store it in the DB, then another function to retrieve that data. Just make sure you set the return type of the read method in your DAO to a Kotlin Flow. It should automatically propagate the updates/changes in your DB back to your observers.
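A minimal sketch of that setup with Room (the entity, DAO and API names are made up); the UI only ever observes the local table, and the repository refreshes it from the network when it can:

import androidx.room.Dao
import androidx.room.Entity
import androidx.room.Insert
import androidx.room.OnConflictStrategy
import androidx.room.PrimaryKey
import androidx.room.Query
import kotlinx.coroutines.flow.Flow

@Entity(tableName = "articles")
data class ArticleEntity(@PrimaryKey val id: String, val title: String, val body: String)

@Dao
interface ArticleDao {
    // Room re-emits this Flow whenever the table changes
    @Query("SELECT * FROM articles ORDER BY title")
    fun observeAll(): Flow<List<ArticleEntity>>

    @Insert(onConflict = OnConflictStrategy.REPLACE)
    suspend fun upsertAll(articles: List<ArticleEntity>)
}

// Hypothetical remote source, e.g. a Retrofit interface.
interface ArticleApi {
    suspend fun fetchArticles(): List<ArticleEntity>
}

class ArticleRepository(private val dao: ArticleDao, private val api: ArticleApi) {
    // the UI collects this; offline it simply shows whatever was stored last
    fun articles(): Flow<List<ArticleEntity>> = dao.observeAll()

    // call when connectivity allows; a failure just leaves the cached data as-is
    suspend fun refresh() {
        try {
            dao.upsertAll(api.fetchArticles())
        } catch (e: Exception) {
            // offline or server error: keep showing cached data
        }
    }
}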
Offline-first or local-first architecture is the way to go. To get a great UX with a standard cloud-first architecture is kind of a rabbit-hole of complexity: https://www.powersync.com/blog/escaping-the-network-tarpit
If you go with offline-first instead and put a synced database on the client-side, it makes things fundamentally simpler, and the UX is great by default: all reads and writes happen against the local database, and state transfer happens in the background when connectivity is available. And it makes the network errors, loading states, etc. so much easier and less complex to deal with.
Biggest wins are in design. Notice people are OK with most financial systems working offline: they only register and queue inputs, you get feedback way later, and they may have to re-run all the transactions to recompute the state.
You can also make smaller changes based on this idea: what users want is to place "orders" and move on to other things, then get feedback "later" in a way that's easy to read.
Because the pain point is often "mental overload": I don't want to be stuck watching your screen or to forget what I need.
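A sketch of that "register the order now, send it later" idea on Android using WorkManager; OrderStore and OrderApi are hypothetical stand-ins for a local table and a backend call, and the unique-work name is just one way to avoid duplicate uploads:

import android.content.Context
import androidx.work.Constraints
import androidx.work.CoroutineWorker
import androidx.work.ExistingWorkPolicy
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.WorkerParameters
import androidx.work.workDataOf

// Hypothetical local store and remote API; in a real app these would be a DB
// table and a Retrofit/Ktor call.
object OrderStore { fun load(id: String): String = "order:$id" }
object OrderApi { suspend fun upload(payload: String) { /* POST to backend */ } }

// The user gets immediate feedback when the order is saved locally; the upload
// happens whenever a connection is available, with retries handled by WorkManager.
class UploadOrderWorker(context: Context, params: WorkerParameters) :
    CoroutineWorker(context, params) {
    override suspend fun doWork(): Result {
        val orderId = inputData.getString("orderId") ?: return Result.failure()
        return try {
            OrderApi.upload(OrderStore.load(orderId))
            Result.success()
        } catch (e: Exception) {
            Result.retry() // try again later, e.g. once back on Wi-Fi
        }
    }
}

fun queueOrderUpload(context: Context, orderId: String) {
    val request = OneTimeWorkRequestBuilder<UploadOrderWorker>()
        .setConstraints(Constraints.Builder().setRequiredNetworkType(NetworkType.CONNECTED).build())
        .setInputData(workDataOf("orderId" to orderId))
        .build()
    WorkManager.getInstance(context)
        .enqueueUniqueWork("upload-order-$orderId", ExistingWorkPolicy.KEEP, request)
}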
Hard to make a clear recommendation without more specifics, but a few ideas that come to mind:
If you're looking for some architectural guidance, maybe these articles give you some good terms you could then google more about:
https://medium.com/google-design/connect-no-matter-the-speed-3b81cfd3355a
Hi
Are there any optimization guides or performance help for the Immich app on iOS and Android? I have an iPhone 15 and a Galaxy S23, and in the app there is constant stuttering and freezing. On Android it often pops up that the app is unresponsive and asks me if I want to wait or kill the app.
If I access the library web interface from a different computer than the one hosting Immich, the performance is perfect without issue; it's only the apps.
- I am running the docker image of immich on my m1 mac mini.
- I have a library of approximately 74000 photos and videos.
The phones are connected with wifi to the same network as the mac mini.
I have tried turning on the "prefer remote images" setting but it seems not to make a difference.
I run the iOS app and, like many, was having performance issues. No one had mentioned trying to enable the option to prefer remote images under advanced settings.
Simply enabling that option resolved the iOS performance issues for me. It's now super fast, no issues.
Great! What are its cons?
Personally I have not found any cons. If I had to think of one, perhaps it would be that you would use more internet data when accessing photos outside of home.
I have a powerful server running Immich, so that possibly also contributes to there being no cons with this setup.
Gigabit internet for external connection
Striped 2x SSDs for storage of photos
i5 8400 CPU running docker container
The issue is that the sync mechanism happens on the main thread, causing the UI to be janky. We are reworking this mechanism; the progress is very promising in making the app really smooth regardless of library size.
Amazing, thanks! When can we see it roll out?
Hopefully within the next two months
Same here. The app is written in Flutter. I write some apps in this framework and I know people who write Flutter apps for a living. These applications will not have the same performance as those written natively. And it's really hard to make them perform smoothly. Even simple apps struggle with it. I believe it's easier for the Immich team to write one codebase for all three platforms (iOS, Android, web), but unfortunately it comes at the cost of performance.
I totally agree. I made apps in Flutter and even simple apps felt sluggish on iOS. When I made the same app natively it was 10 times faster.
The problem is the sync mechanism running on the UI thread; we are working on moving it to the background, and it's pretty promising so far.
It works slowly while the app is creating the timeline. Once that's finished it works fine. I think the devs said it's being worked on: they are moving timeline building and other similar workloads off the main thread, so once it's done it should work well no matter how large the pic library is.
Also give logging out and back in a try; that greatly improved the issue for me.
I’ve had slower wifi speeds for months even after installing the Deco Mesh Wifi System. So I decided to open the app and chose the “Network Optimization” feature just to play with it, turns out I was notified I have significant interference and that they would optimize my network and that my connection may be spotty for a bit while they do so.
I went from 90-100mbps download and 60mbps upload to 322mbps download and 515mbps upload. Idk if it was truly the network optimization feature or maybe they did some kind of network reboot that got my system to how it should be working, regardless, the increase in speed is enormous. It wasn’t a fluke, I’ve tested from multiple different devices both yesterday and today. The speed is higher, and I can notice it when streaming content as well. No more pauses during shows/videos or anything like that, when it used to happen literally EVERY time (every 20 mins or so). I’ve been watching YouTube all day without seeing a buffering circle once.
I clicked network optimize and now several of my smart bulbs are not working.
I’m interested in this. Every time I use network optimization on my Deco I lose Internet. I just installed cameras and a ring doorbell. That’s the only things that has changed. I know this is an old thread but did you find a resolution?
I didn’t sorry. I called TP Link and they said my $800 TP Link deco should be upgraded to a newer TP Link product … I dropped them and moved elsewhere
Is there a way to schedule the network optimisation instead of doing it manually?
👏🏻👍🏻👍🏻👍🏻👍🏻…that would be an important basic feature…
You mean there is not a function yet?
Just got a deco setup and this feature is great. Is this somewhere in the tether app for archer routers? I couldn't find it.
Why doesn't Deco do an automatic optimization when rebooted? And, add a scheduled optimization button like scheduled reboot of Deco Network? This would definitely improve user experiences and satisfaction.
Is there a way to schedule the optimization or can be done only manually?
Tldr at bottom for how to probably get better/more consistent ping
-
I am an OTR Trucker but also a gamer with what I like to think of as a somewhat sophisticated rig in my rig. I play a lot of competitive games like Dota 2 and League of Legends as well as fighting games like Guilty Gear: Strive, Street Fighter 6, and Granblue Fantasy Versus: Rising. (add me if you wanna play any fighters especially lmao)
Obviously, using cell towers for internet through T-Mobile isn’t ideal, but I’ve created a system that has tremendously improved my experience. I went from being able to play my favorite games at a hit or miss quality, to being able to play with no issues 95% of the time. I’m sure that this technique will help anyone who is mobile with their Home Internet like I am, but could also prove useful to those in fixed locations depending on a testable variable.
I’ve been meaning to write this for a while but I was lazy, so this guide is rough as I’m tracing back each and everything I did, but I will do my best. This is born out of the fact I couldn’t find a guide on how to do this from start to finish. Everything I found online felt like it described different parts but never the whole story. It’s likely there’s a better/more efficient way to do all this but here’s my take on it that I came up with:
-
The problem (usually): Bufferbloat
You can test your connection’s bufferbloat here: https://www.waveform.com/tools/bufferbloat. A caveat is that this may change depending on how much traffic is currently on the network, like what time of day it currently is.
Bufferbloat is the primary culprit behind why you can have 300 mbps down but still have ping spikes. An AI overview that sounds mostly correct from what I remember:
Basically, T-Mobile Home Internet is optimized to maximize speed for things like downloading and streaming, which works well for those activities but not as much for gaming.
Put simply, when TMOHI tries to use all available bandwidth, it fills up the connection's buffers—temporary storage areas that hold data before sending it out. When these buffers get too full, new data (like game commands or video call audio) has to wait in line, causing delays. This leads to higher ping and lag, especially in online games and video calls.
Apparently, the implication here is basically that if you use 100% of your download speed you may experience bufferbloat, but if you limit it to under 100% then you'll resolve some of it, from my understanding.
So, we need to reduce bufferbloat because that is what’s causing the ping spikes. However, TMOHI doesn’t let you do this by default, you need a separate router capable of running the SQM plugin which is designed to manage this. Now, evidently you can get this plugin on different types/brands/models of routers, but I looked at a list and got one at Walmart and ran into a ton of issues and didn’t get anywhere so I gave up and bought a GL.iNet router specifically for this that was confirmed to work because they apparently published the SQM plugin in the first place?? Anyways my point is you may be able to accomplish installing SQM on different routers but good luck I couldn’t figure it out.
-
The model router I got was the GL.iNet GL-MT6000 off Amazon. Once I had it, I connected an ethernet cable from the WAN port of the GL.iNet GL-MT6000 to a LAN port on the TMOHI box, connected another ethernet cable from a LAN port on the GL.iNet GL-MT6000 to my laptop, and went to http://192.168.8.1 which is the login of the GL.iNet router.
While logged into the admin panel, I set up the router as you would a new router, but in regards to our specific goal of reducing bufferbloat, I went to Applications > Plug-ins and per research on this website: https://forum.gl-inet.com/t/configuring-sqm-to-reduce-bufferbloat/14125 I searched for, downloaded, and installed luci-app-sqm and sqm-scripts.
Now we need to go to advanced settings, which you can find under System > Advanced Settings or 192.168.8.1/cgi-bin/luci. Go to Network > SQM QoS. This is the page where you’ll set the relevant settings to reduce bufferbloat. Essentially, you can set limits to your upload and download speed, which will reduce your bufferbloat and give you better ping. These are my settings:
The previous link mentioned the Interface name should be eth1 but I set all of mine to eth0 and it works so idk. Possibly refer to the previous link from gl-inet.com.
Essentially these are 3 presets, because remember that all you *need* to do is make sure you're not using 100% of the download speed that you can; 90% is fine. (I think…) So I switch between these 3 on an as-needed basis, because one huge dynamic to this is that you are limiting your download speed, essentially trading speed for ping. Switching between these 3 presets, I can try to eke out as much speed as I can while also getting good ping. Obviously you can adjust these to fit your needs; I imagine someone stationary might really want to fine-tune this.
Speed test with SQM off:
https://reddit.com/link/1jc7afx/video/h1vg3jiykxoe1/player
Speed test with SQM on:
https://i.redd.it/riloq9w1lxoe1.gif
Obviously, I lost a ton of download speed, but in this case enough to comfortably stream videos and browse the web as I alt-tab from whatever game I’m in. If the internet is super awful I may try another preset, essentially in desperation as now I won’t be able to stream comfortably but as a trucker I’m happy if it just gets me good ping.
-
Possibly unnecessary but additional steps I took, recommend doing them all tbh but w/e:
I also installed luci-app-qos and qos-scripts because QoS was often mentioned alongside SQM. I'm not sure they're necessary, but I'm just noting that I did for record's sake. From my understanding, QoS is more relevant for households with multiple devices that want to prioritize a certain type of traffic within the household? Unsure if relevant to me, but maybe this could be a golden ticket I'm unaware of. Someone smarter than me may know.
I also disabled Network Acceleration under Network > Network Acceleration. For whatever reason, when I had everything set up later on it didn't work, but disabling this caused everything to start working. While I was troubleshooting, in Advanced Settings, I added this text to System > Startup > Local Startup:
-
# Put your custom commands here that should be executed once
# the system init finished. By default this file does nothing.
. /lib/functions/gl_util.sh
remount_ubifs
/etc/init.d/sqm restart
exit 0
-
Apparently a problem that can happen is a cache doesn’t get cleared and the plugin won’t activate, and apparently this startup script is supposed to fix that issue? Anyways, posting it for record’s sake.
-
Hilariously, another benefit this has is the side-effect of ‘spoofing’ your internet as wired internet, which is relevant because it’s trending in some fighting games to detect if someone is on wi-fi instead of wired internet, and then sort them out of your matchmaking. By connecting an ethernet cable from my laptop to the router, I am tricking my computer into thinking I’m wired even though my internet is ultimately coming from T-Mobile cell towers while I’m in a Semi-truck. Lmao.
Though, I guess technically this means that you can connect from your new router to your system wirelessly and this would all work in theory, but I enjoy not being filtered in Street Fighter 6, especially when my internet is usually actually super good.
-
TLDR:
- Connect an SQM-compatible router (GL.iNet models are usually compatible) to your TMOHI box, and connect your system to your new router.
- Enable SQM on the router, probably by installing luci-app-sqm and sqm-scripts (possibly turn Network Acceleration off)
- Create at least 1 preset of SQM settings where you essentially throttle your download/upload speed while selecting either interface eth1 or eth0.
- profit?
I've messed around with it a little. It doesn't seem to really affect my latency. I'll see my download/upload be slowed down, but no big ping differences. My speeds are also not super consistent. That could be a reason, I suppose.
So messing around a little more. I can get an A+ rating but I have to severely slow my download. I've heard setting SQM to about 80% of full speed is what's recommended. I have to cut mine down to about 1/4 or less of my full speed to get rid of the download latency.
Pretty neat.
I love that you gave a detailed writeup.
I would probably do this for playing some games, but there is one big issue I have.
My NAT status. I usually game on console, Xbox and PS. If I connect my Xbox or PlayStation directly to my modem I get a moderate NAT, or type 2 NAT. Doesn't matter if I use wifi or a LAN cable. If I connect to my router that is connected to my modem I get a strict NAT, or type 3 NAT. I have no idea why it does it. But it does it every time. This matters because the game I play the most online is Destiny 2, which is based on a P2P networking system. Trying to play it with a strict NAT is not fun. You spend about 10x longer trying to matchmake.
I'm lucky. I'm rural so my tower is never congested. I have a Waveform antenna. My unloaded ping is around 35 ms and my loaded one is around 130 ms. Not terrible. If you look at a lot of the speed tests people post, you will see loaded pings of 1500 ms or more at times.
I don't know how bad it is for me to play with that latency though. I've been doing it for a few months. At times it does feel like I'm behind everyone else. I need to pack up my xbox and take it to work and try it out soon. I have a couch in my office and a gaming TV, hooked up to my fiber. My ping on fiber is around 5 ms. Maybe if I try playing on both on the same day I can see what the difference really is like.
T-Mobile is CGNAT, sadly, as are most 4G/5G providers. You're behind their gateway, which NATs (maps private IPs to public IPs) on their ISP gateway (router), and then you NAT again behind your own personal gateway, which means you'll never get a personal IPv4 address or access to port forwarding, which these games require. It matters little in this context since T-Mobile won't let you port forward out of their main ISP gateway, but you're also double-NAT'ed (NAT behind NAT), which can affect NAT traversal/hole punching through two firewalls.
If you're willing to get your hands dirty, there are guides on how to do port forwarding with a VPS (cloud server) such as https://github.com/mochman/Bypass_CGNAT , but keep in mind you're adding an extra hop in internet connectivity, which will affect latency. In some cases it may be your only usable path to some games or self-hosting or torrenting (the latter comes with its own privacy caveats such as getting notices from your ISP for copyright infringement, YMMV). If you do go down this path, a VPS hosted in your city will deliver the lowest additional latency. You're basically using a cloud-hosted server to port forward through its internet connection and relaying it back to your home computer as if you had direct internet access.
I am also currently at ~35 ms from the tower, sometimes lower. In Black Ops 6, for example, I usually get about 40-45 ms to the server, which is pretty flippin' good considering it's all radio cell tower based. At a relative's AT&T fiber, their latency to the largest speed test servers/datacenters is something like 3 ms, which is insane.
I've got zero clue what's up with that NAT situation, but it reminds me of how back when I used PdaNet for internet, for whatever reason any connection to Xbox servers, and only their servers, didn't work lol.
Was relevant because I was trying to play Gunfire Reborn on my PC with a friend on Xbox, but the cross-play PC version of the game was on the Windows Store and not Steam.
Cue super bizarre situation where my PdaNet connection worked on the Steam version of the game but not the Windows Store version lol.
Nice work, fellow driver!! T-Mobile Home has been a game changer.
Be careful. Unless you are paying $160 for the Away plan, they can turn off your HSI at any point. I haven't seen anyone report this yet, but it can still happen.
Off topic but are you still using the home plan? I just got the $60/mo one a few days ago, but everybody’s saying they send you a cease and desist letter after a few months forcing you to stop using the home version (not the $160 away, talking about the home one) over the road. Is that true?
You're gonna have to mute your volume at the preparing resources page because they have a video that plays on repeat till it's done.
Same problem here; the only solution is to update it at 12-2am, depending on your location.
Hi Everyone,
I have a HomeyScript in Homey that pings (well, fetches) Google every minute:
const response = await fetch("https://www.google.com", {
  mode: 'no-cors'
});
This is my crude way of trying to get a sense of my "internet health". Once an hour I get the sum of errors for that hour: X errors (out of 60 tries). I have a dashboard with this parameter and I follow it.
In the beginning I thought that a bad value mostly had to do with Homey's bad Wi-Fi. Anyhow, I went on vacation away from home for a few days, and voila: the error rate was greatly diminished.
Ping Errors - Away and then Home again
What could be the reason for the low errors when I'm away and the spikes when I get home again? Devices that were away and then reintroduced were: 1 iPad, 3 iPhones and 2 Apple Watches. 2 laptops were shut down during that period as well.
Many thanks!
Edit (added the error message when ping fails):
Google is not reachable FetchError: request to https://www.google.com/ failed, reason: connect EHOSTUNREACH 142.250.74.164:443
at ClientRequest.<anonymous> (/node_modules/node-fetch/lib/index.js:1501:11)
at ClientRequest.emit (node:events:526:28)
at TLSSocket.socketErrorListener (node:_http_client:442:9)
at TLSSocket.emit (node:events:526:28)
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
type: 'system',
errno: 'EHOSTUNREACH',
code: 'EHOSTUNREACH'
}
Ping failed, new NumberOfErrorsToday = 8
I tried a real command-prompt ping to that IP and it did not fail for 5 tries, so maybe Homey does this in a more error-prone way?
Probably Google blocking this. It's not acceptable to ping a website every x minutes/seconds. This can be seen as a DDoS attack, a network breach, etc.
There are tools to monitor a website. Try pinging your own router to see if that says anything.
I tried running Ping from a command prompt repeatedly for 1 hour without any problems or blocking. Looks like Homey is not up to the task.
What is the issue when you are at home? Your script is receiving disconnects, are you experiencing the same with normal use?
Why are you pinging? Did you try pinging something else, maybe Google DNS, Google time... etc?
Some more background information is needed as to why you are doing this.
Are the devices you're introducing using a lot of upload bandwidth relative to what your connection supplies?
They are downloading more than uploading. I'm using a 250 Mbit/s connection and the UniFi UI doesn't show any problems.
Happens to me too and in my case it’s caused by my internet service provider. Using 1.1.1.1 app helps.
What’s this? How does it work?
Change your router’s DNS server to CloudFlare.
The app 1.1.1.1 by Cloudflare is a DNS resolver. Sometimes your internet provider or Wi-Fi setup routes your requests inefficiently, which can slow down downloading. When you use 1.1.1.1 it helps your phone connect to Apple’s servers faster or more directly.
Dude this fixed it. Thank you!!!
You’re better off without instagram 🤣
Maybe App Store issue
You’re better off without Reddit too but here we all are.
Tried with some different DNS or any ISP armor service you have?
I’m having the same problem. I hate it
Download the app 1.1.1.1
Just wait 7 hours and 1min
Seems like people around here don't like jokes ¯\_(ツ)_/¯
I don't know what to do about Delivery Optimization. It destroys our speed for over an hour sometimes. I've disabled it in Services, I've disabled it in regedit, I've tweaked its settings to allow it the absolute smallest amount of data possible, and it takes all it wants anyway. We only have 1.2 speed and it takes 7Mbs of that, sometimes more. Even YouTube won't load, let alone work.
Waaaay back I used NetLimiter to limit bandwidth usage.
How about adding any router with the features you need to the chain? You have control there that nothing else can interfere with.
For updates you should be able to schedule them to only run at times that are convenient such as when you are asleep.
If none of the things you've mentioned work, then I might have something else. The collateral damage here is that this will impact any core Windows service, but should otherwise be fine.
Open up a PowerShell as administrator and input the following commands:
New-NetQosPolicy -Name "svchost 443" -AppPathNameMatchCondition "svchost.exe" -IPDstPortMatchCondition 443 -ThrottleRateActionBitsPerSecond 128KB
New-NetQosPolicy -Name "svchost 80" -AppPathNameMatchCondition "svchost.exe" -IPDstPortMatchCondition 80 -ThrottleRateActionBitsPerSecond 128KB
This will, as hinted above, cause svchost.exe (which hosts Windows Update) to do HTTP(S) downloads with a maximum of 128KB/s. You can adjust this value however you please.
You'll have to do this on each device in your network.
To see if this worked, you can use Get-NetQosPolicy -PolicyStore ActiveStore, which should display both rules.
You can remove the bandwidth restriction with Remove-NetQosPolicy -Name "svchost 443" and Remove-NetQosPolicy -Name "svchost 80".
I’ll give this a try!
So I finally got round to sending some screen recordings to support, documenting the slow performance I regularly experience in the Android app. Here are some numbers from a single session, connected to a US-based VPN run by a large organization stateside (as advised by support, since I am located far from the US):
- 6 seconds to display a 20KB pdf attachment
- 2+ seconds to display an email in the app
- 5 seconds to display an 18KB png attachment
- 2+ seconds to display thumbnails from multiple attachments in the same email
- 4 seconds to download a 19KB docx attachment
These are not great numbers. For a direct comparison, I forwarded the email from the first bullet to a Fastmail account and used their app to open the exact same attachment under the same conditions within a few minutes of timing it in Hey. It took under 2 seconds.
I've already sunk a lot of time into this so am not keen to do much more controlled experimentation, but ~3x snappier performance in Fastmail does seem consistent with what people leaving Hey for Fastmail tend to talk about on this sub and the Fastmail one, afaik.
Also, tbc, after quite a bit of back and forth with support, they admitted that what I am experiencing is expected behaviour given network conditions, namely a consequence of Hey's backend design that apparently does not affect Fastmail's performance.
This is disappointing, of course, as I'm quite enthusiastic about other aspects of the service, I use it all the time as my primary means of external communication with multiple addresses across a couple of large institutions, and find many of the features (not all of them but many) quite clever and well-made. I also appreciate the company's focus on privacy etc and I prefer supporting small-medium software companies rather than the giant Apple/Google/Microsoft oligarchs.
I'm not sure at this stage if I will continue using Hey. Performance issues like these, which affect users most who are physically far from wherever in the states Hey servers are located, seem to be an outcome of Hey not relying on common content delivery networks like most apps do. If Hey put some servers closer to me, perhaps the app would be snappier — although given the number of people complaining about performance issues, it seems unlikely that distance to servers is the only issue here.
Performance can in principle be improved, though, so maybe if I wait and see a bit, I'll be pleasantly surprised....?
Yes, this is a consequence of HEY's infrastructure choices.
Getting high performance for web apps typically means locating servers closer to customers. Usually this comes in three forms:
Doing all of this well for the whole world costs a lot. Just the first one means multiple machines in most major cities, or at least the top 30ish.
The problem is that HEY is just not a very big service. They can’t justify doing this.
And that’s where the cloud comes in. You don’t need a whole server in every city, you can have just the small slice you need. Yes you pay a premium, but it’s way cheaper to build reliable and performant services on cloud providers, especially when you account for the human costs of running it all.
HEY don’t prioritise this. They design for the US English market, they host exclusively in the US (no EU data sovereignty), they only price in USD, and therefore their product is lower quality for those too far from the US.
FastMail on the other hand are just in a completely different ballpark. They’re a much bigger business, available in many countries, just playing a totally different game. Not quite what Gmail is playing of course, but still.
This is all quite simplified, happy to go into more detail, but this is what I do for a living. I’ve moved a company off its own machines onto a cloud provider, and I now do performance and reliability engineering for services far bigger than HEY (or FastMail for that matter).
Interesting. I find Fastmail atrociously slow, and rolling out their recent updates without informing customers was a really poor decision.
No email service is perfect, and Hey checks enough boxes for me that I'm sticking with them.
Not me. Fastmail is incredibly fast
Wait, what updates? 😱
They’ve spoken recently about moving away from the cloud and I believe I read somewhere that they’re about to move their S3 buckets to their own storage too.
Which country are you in?
Are you blaming AWS S3 for the performance issues here?
No. I’m wondering if attachments are already loading from their own datacenter.
Have you shared any of this with the HEY team? We can’t fix it. They may be able to address.
Did you read the post?
Yes. Did you?
post this on twitter and see how defensive dhh/jason can be
I don't have a means to post there anymore as I wiped and parked my account (bye to my 10K followers lol) as I have little patience for the ugly sorts of hate speech that I kept seeing in the months after Elon Musk took over the platform. most people in my field moved to bluesky and are happy with it, nice product.
anyway I haven't interacted with Jason Fried before but DHH sent a friendly reply about this current issue overnight and we're having a chat. it's looking pretty clear that the poor performance I'm experiencing is a result of physical distance to the two server locations Hey uses. Presumably Fastmail et al are more cloud-based and so there are faster routes to 'grand central' for those apps. Will be interested to see how performance improves in the next while, though, as this is really my main gripe about the service, which I otherwise find well suited to my use case (and by and large support has been helpful on other issues — except when the issue is expected-but-non-ideal behaviour)
optimizing app for slow networks
Key Considerations for Optimizing Apps for Slow Networks
Data Compression: Implement data compression techniques (e.g., Gzip, Brotli) to reduce the size of data being transmitted. This can significantly decrease load times on slow connections.
Lazy Loading: Use lazy loading for images and other resources, loading them only when they are needed or visible to the user. This minimizes initial load times.
Caching Strategies: Utilize caching to store frequently accessed data locally. This reduces the need for repeated network requests and speeds up app performance.
Minimize Requests: Reduce the number of network requests by combining files (e.g., CSS and JavaScript) and using techniques like bundling and minification.
Optimize Images: Use appropriately sized images and modern formats (like WebP) to reduce loading times. Consider using responsive images to serve different sizes based on the user's device.
Progressive Enhancement: Design your app to provide a basic experience on slow networks, gradually enhancing it as the connection improves. This ensures usability even under poor conditions.
Error Handling and Retry Logic: Implement robust error handling and retry mechanisms for network requests to improve user experience when connections are unstable.
User Feedback: Provide visual feedback (like loading indicators) to inform users that their request is being processed, which can help manage expectations during slow network conditions.
Recommendation: Prioritize implementing data compression and caching strategies first, as these can yield significant performance improvements with relatively low effort. Additionally, consider conducting user testing on various network conditions to identify specific pain points and optimize accordingly.
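To make the "compression and caching first" recommendation concrete at the HTTP layer, here is a small Kotlin/Android sketch: OkHttp already negotiates gzip transparently, and adding a disk cache lets repeat requests be answered locally when the server's cache headers allow it (the 20 MB size is an arbitrary choice):

import android.content.Context
import okhttp3.Cache
import okhttp3.OkHttpClient
import okhttp3.Request
import java.io.File

// OkHttp sends Accept-Encoding: gzip and decompresses responses transparently;
// the Cache below honours server cache headers so unchanged resources are
// served from disk instead of the radio.
fun cachingClient(context: Context): OkHttpClient =
    OkHttpClient.Builder()
        .cache(Cache(File(context.cacheDir, "http_cache"), 20L * 1024 * 1024))
        .build()

// Identical requests within the server's max-age can then be answered locally.
fun fetch(client: OkHttpClient, url: String): String? =
    client.newCall(Request.Builder().url(url).build()).execute().use { response ->
        response.body?.string()
    }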