This is a kiss of death to indie development.
But paradoxically, it's great. Killing interoperability is a nail in the coffin, and it pushes more and more attention toward alternative solutions outside the Google market, especially in the independent software space. Like yt-dlp, FreeTube, F-Droid - my whole family actually uses them and I recommend them to everyone. I can't wait for some alternative GDrive client library that simulates a browser to throw data over that garden wall, and I don't care if it nags with captchas. The more hassle, the more people are going to hate that ivory tower.
To be clear, the problem with Cambridge Analytica was not Cambridge Analytica. The problem was - and still is - Facebook's habit of getting everyone to overshare and self-surveil. There needs to be some control and vetting over the apps that have access to your data, but not so much that genuinely honest developers quit the game.
My guess is that Google just doesn't want third-party clients (you can't shove "AI" or "Investor Advertising" into it), so they're slowly turning up the heat by abusing the data scare.
My Facebook account is largely limited to information that’s already public. I imagine there are Google Drive accounts out there with tax returns, health records, background checks, etc. in them.
Yes, it sucks that this puts roadblocks in front of well-meaning developers, but for the general public, it’s pretty hard to tell who is a well-meaning developer and who isn’t. Also, an inexperienced or careless well-meaning developer can still accidentally put your data in a public internet-facing DB.
Facebook provided a general API for apps, not some kind of data feed. The API required user consent from the app user, though almost certainly not informed consent.
The API also provided too much data, in particular on the user's social graph, which is why a single user giving uninformed consent would lead to data being extracted for multiple others. But even if the app had informed users about intending to steal the social graph, most users would still have consented. They would not have read the text, or not cared. Just click ok until the computer lets you do what you wanted.
So we really do know that the only way to safeguard the data is to design safe scoped APIs for the typical use cases, and keep dangerous unscoped APIs around only as an escape hatch with much stricter security and safety requirements.
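The safe-scoped-API design described above can be sketched in a few lines. Everything here - scope names, endpoints, data - is hypothetical, invented purely to show the mechanism: tokens carry explicit, narrow grants, and every endpoint checks for the one scope it needs, so a quiz-app token can never walk the social graph.

```python
# Hypothetical sketch of a scoped API. All scope names and endpoints
# here are invented for illustration.

def require_scope(token_scopes, needed):
    """Reject the call unless the token was granted the needed scope."""
    if needed not in token_scopes:
        raise PermissionError(f"token lacks scope {needed!r}")

def get_profile(token_scopes):
    # Typical use case: an app reads the consenting user's own profile.
    require_scope(token_scopes, "profile.read")
    return {"name": "Alice"}

def get_friends(token_scopes):
    # Dangerous, graph-wide access sits behind a separate grant that a
    # quiz app would never be given by default.
    require_scope(token_scopes, "friends.list.read")
    return ["Bob", "Carol"]

quiz_app_token = {"profile.read"}
print(get_profile(quiz_app_token)["name"])
try:
    get_friends(quiz_app_token)
except PermissionError as e:
    print("denied:", e)
```

The point of the split is that the "escape hatch" scope can then carry its own, much stricter review process without burdening the common case.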
Nowadays this seems like an incredibly dumb idea, sure, and personally I disabled it entirely the moment it came out. But we can cut them some slack: back in ~2006 Facebook was a new thing, for young people, and nobody was sure where this new "social media" thing was going to go.
On top of that I believe Cambridge Analytica did the usual "personality test" trickery where you fill out a survey, then it won't show your result until you hand over your details and accept some legal mumbo-jumbo.
So your Great Uncle wanted to know what Harry Potter character he was, clicked a consent button, and Cambridge Analytica got your PII.
These are illegal otherwise, but very useful for journalists reporting on political matters.
I would love a user-friendly way to just use Backblaze or some other S3-compatible provider as my drive.
Edit: I guess that's sort of exactly what Transmit does, but I want something that is simple enough that anyone can use it.
You do have to know what a file is and what a directory is, mind you, which is something I can non-ironically say does rule out half of GenZ or anyone else raised in the postmodern era, where 'content' just lives 'in' an 'app' and can be searched for (and if you're lucky, found). But I don't think people of that minimum level of sophistication are in the market for products like Backblaze or S3 - they're just out there paying for more iCloud storage (or new laptops) because Apple said they are out of space.
Google is a drag.
You can do that self hosted or via fastmail or similar
This is a bit naive, a very small percentage of people would be interested in these alternative solutions. Most people don't even install any third party software on their computer and just use the browser for everything.
The quota increase process is roughly:
1) Fill out the same form every year from scratch
2) Send it into the black hole that's Google "support"
3) A few weeks later, receive a reply from someone asking a question irrelevant to our use case
4) Two weeks later another person replies asking for screenshots of the "implementation", so you send a screenshot of "func storeTrailerMetadata()"
5) Another two weeks later, another automated person replies that you got approved.
I understand that level-1 support for these orgs are basically documentation librarians. Cool. We pay an incredible amount for premium support, but whatever. It's fine. What matters is that we have a rep that is engaged and cares about us being unblocked and isn't going to let us flounder for issues their support team is not going to solve. Have never seen this level of commitment from Google.
It's rarely a complete black hole, and I have spoken to product engineers and owners for multiple lines.
The entire process was awful.
A few days later, the Google engineer assigned notes to us that we can escalate to P1 if this is blocking our workflow, even when not in production. I take this to my manager and they agree that it’s time to move it to P1.
We move it to P1 and immediately get traction, only to be stonewalled by a support engineer confidently asserting that the code throwing the error, which only existed in a private Google-maintained container, which only interfaced with our app through launching a cloud job through their platform, was actually our responsibility.
No joke, they actually said “As stated in my prior message, this issue is due to your code”, despite our code being a thin wrapper around their demonstration code to run it from the command line.
In my most business professional tone, I tell them off for lying to us about them debugging on their end and inform them that I will be immediately escalating because of this dissatisfactory response. This finally gets the ticket moving and a few weeks later, a bug fixed version of the entire platform is deployed.
Total time from start to finish:
- P2: 2.5 weeks of daily updates
- P1, until we’re told that it’s our fault: 8 hours
- P1 escalated until issue was completely fixed: 5 weeks
We paid for premium support. I cannot imagine how bad free support is.
I know everyone loves to dunk on Google, and I definitely agree their communication and customer service to app developers is shite, but this change to permissions scope is a good thing. If you have full, unfettered access to a large number of people's Google Drive data, you're a huge target for malevolent actors. If you can't afford the new audit requirements (which I've been through, and which are quite easy - if anything I'm sympathetic to the argument that they're more "box ticking" than valuable security audits), then I'd really question your ability to appropriately safeguard so much critically private data. For reference, these audits are about 1/20th as complicated as a full SOC 2 audit, for example.
FWIW I'm not previously familiar with this Transmit app, but based on their use cases (e.g. backup) it sounds like the limited "drive.file" scope wouldn't work for them. Still, if you want complete, unfettered access to my entire Drive account, I don't think it's a bad thing that Google is enforcing some minimal security standards.
The certifications themselves are valuable, but Google’s main issue lies in its poor communication and support. Third-party developers, even those paying $60k annually for re-certification, struggle to get timely responses or any at all.
What’s ironic is that the very partners handling these certifications often avoid using Google themselves because it’s “unreliable if something unusual happens.”
And that’s the crux of the issue—when things do go wrong or something unusual happens, it’s incredibly difficult to resolve.
Of course security is good, but this is just hindering third party access.
Panic never got complete or unfettered (or any) access to my Google Drive. I got access. I used their application, which can easily be supervised with Little Snitch or other software to prove that it is not sending a copy of my credentials or my files to Panic. If it were OSS it would be even more categorically provable that it's not giving access to anyone but the end user, but these draconian requirements would still apply.
The point is, Google is telling THEIR users, not Panic, that they aren't qualified to use their own judgment to select a client. It would be just as bad as Microsoft saying that if you want to check your email or access SharePoint you can't use anything but Edge (insert jokes about how they basically did do that 20 years ago with MSIE, but let's be serious, that sort of thing would be rightfully mocked today).
> I don't think it's a bad thing that Google is enforcing some minimal security standards.
These certification programs are 100% a moneymaking program to engage in a lot of box-checking, which I'd wager has zero correlation with a positive outcome for anyone other than the shareholders of the "labs" that do these audits.
The OAuth scope https://www.googleapis.com/auth/drive.file [0] basically allows this. If memory serves, an app using this scope can create a folder and have access to things within that folder, and it certainly has access to all files created via the app (which should in general cover iA and probably also Transmit). Offhand, I don't actually see what iA or Transmit are doing that needs the broader scope, though TotalCommander, trying to be a replacement file manager, would still need the biggest scopes.
[0]: See https://developers.google.com/drive/api/guides/api-specific-..., the drive.file scope is non-sensitive so it needs a much more cursory approval process
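As a rough illustration of what the narrow scope looks like in practice: the scope string below is the documented drive.file identifier, and the URL targets the Drive v3 files.list REST endpoint. The request is only constructed here, never sent, and the field selection is just an example.

```python
# The drive.file scope string as documented for the Drive v3 API, plus
# a files.list request URL built (but not sent) with stdlib urllib.
# Under this scope, the listing only covers files the app created or
# that the user explicitly opened with the app.
from urllib.parse import urlencode

DRIVE_FILE_SCOPE = "https://www.googleapis.com/auth/drive.file"

def files_list_url(page_size=10):
    params = urlencode({"pageSize": page_size, "fields": "files(id,name)"})
    return f"https://www.googleapis.com/drive/v3/files?{params}"

print(files_list_url())
```

A real client would send this with an `Authorization: Bearer <token>` header, where the token was granted only the drive.file scope.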
The main thing I was thinking would be beneficial is getting user confirmation at better than the whole drive level. I think Google is trying to prevent cases where a third-party stores tokens on their servers which are breached, and in that kind of scenario it could be useful to push for scoping so e.g. if iA were breached the attacker could get your screenplay draft but not the folder where you backup your password manager or financial data.
Because large companies that can afford it have proven to be exemplars at safeguarding private data?
Certification schemes like that don’t have a good track record.
Edit: Next logical step is auditing every IMAP client before you can connect it to Gmail. Ridiculous.
This isn't a theoretical concern. It's pretty much exactly what happened with Cambridge Analytica. Facebook didn't really do anything wrong; they provided an API for data access, people explicitly authorized an app with broad access to their data, and it turned out that the app was basically a trojan horse for data collection. And politicians, the media, the general public, and even the technologically savvier people who should know better all blamed Facebook for this.
That is, the vast majority of people whose data was sucked up by Cambridge Analytica did not explicitly authorize the app. Instead, their friends did, and at the time authorizing a third party app meant the app got to see everything you did, including all of the data about your friends. Now, you may argue that if you share your data with your friends that you're then at the mercy of whoever they give this data to, but I guarantee very few people at the time understood this - saying "I authorize Bob to see my FB data" is different, in most people's minds, to saying "I authorize Bob to see my data, and also any random app that can convince Bob into giving them access." Facebook was rightly pilloried for this permissions model.
Actually... they're not that far away from that, if they're not already implementing it. Office365 and Google have disabled (or are disabling) Basic Auth for IMAP/SMTP, supporting only OAuth2, which requires an AppId/ClientSecret handed out by registering your app with Microsoft/Google.
It seems that you can still steal Thunderbird's AppId/ClientSecret from its open source code, for now (https://simondobson.org/2024/02/03/getting-email/), but...
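For reference, the OAuth2-for-IMAP handoff described above ends with the client presenting a SASL XOAUTH2 string built from the access token. This sketch builds that wire format; the address and token are placeholders.

```python
# Sketch of the SASL XOAUTH2 initial client response used for IMAP/SMTP
# once Basic Auth is gone. The address and access token are placeholders.
import base64

def xoauth2_string(user, access_token):
    # Wire format: user=<addr>\x01auth=Bearer <token>\x01\x01,
    # base64-encoded before being sent on the wire.
    raw = f"user={user}\x01auth=Bearer {access_token}\x01\x01"
    return base64.b64encode(raw.encode()).decode()

print(xoauth2_string("alice@example.com", "ya29.placeholder")[:12])
```

Note that Python's `imaplib.IMAP4.authenticate` base64-encodes for you, so an authobject callback would return the raw (un-encoded) bytes instead; the base64 form above is what actually crosses the wire.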
No idea if that's what google is targeting here, but that is a cloud service, that presumably gets a copy of people's Google Drive OAuth keys if they use Google Drive with Transmit and the sync service.
It’d be one thing if Project Zero was running serious audits, but this policy is designed to let them check audit checkboxes so that when you lose data, it’s hard to sue Google.
There's a reason why airgapping is the only way to secure important systems (and of course that can also have a number of vulnerabilities).
And besides, how do you know it's a local only app if you haven't audited it?
"Just trust me bro" -- some dev
If you use some of the lower-tier CASA labs, it's not that expensive (4K/year), but it is definitely a nuisance for a pure desktop plugin like ours that has absolutely no cloud component (other than connecting to GDocs).
Yes, assessing the trustability of apps is important. No, I don't trust Google to do it properly. Maybe I didn't choose Google because I find them the best, but because I have to (because Google, surprise surprise, forces itself down the throat of everyone, so the people I want to collaborate with use it).
Did my apps certify Google as a trustable provider ?
Google is not "forcing itself down the throat" with Google Drive, and even my Android phone comes with 3 cloud providers.
And yes, your apps certified Google as a trustable provider when they added support for it. Such support is not automatic; it requires non-trivial effort, and presumably no one would do it for services they do not trust.
Are you talking about the technical support (ie implementing APIs) or the bureaucratic support (ie going through Google's process) ? Because the first one is a result of Google going its own way with its own protocol, and the second is entirely a decision of Google.
> and presumably no one would do it for services they do not trust.
No, they would do it for a service that is vital to the sustainability of their app, trust or no trust. When Google is so hegemonic it's sometimes impossible to avoid, app developers must consider whether Google's ways are worth implementing based not just on Google but on users' willingness to make do with an app that doesn't work with Google. Not being compatible with Google is more often than not seen as a problem with the app, not with Google.
> the people I want to collaborate with use it
If I were an independent individual who didn't need anyone else, then it might be a decision I could make. That's the neo-liberal lie you are driven to believe. But we're always part of a society and can't exist without it. Some of the information I want to read has been elaborated and written over years in Google Drive. Some of the people who want to share stuff with me will only use Google Drive. Of course I do my best to migrate them away, but that only goes so far.
Did you read the part where it took multiple months to continue because of slow replies and non-working tooling from Google's side?
It's also pretty expensive for a relatively niche app, it might be fine if you are Dropbox or a big VC funded Mail app but for smaller companies it's not "easy".
> I don't think it's a bad thing that Google is enforcing some minimal security standards.
How would Google verify that the version they are "scanning" is the same one that gets uploaded to the app store with every small app update? They can't, so there's no real security benefit.
It raises the bar for low effort hackers and improves security.
I disagree with the op. Sorry mate, go through the CASA audit and get the access.
Especially because they'd now have to go through another third party to perform the audit process (not just the security lab, the entire thing); according to the Total Commander folks [1] that's 75k/year/program.
There are meaningful ways you can improve the security of your app. There are ways to make sure your app passes CASA. I found very little if any overlap between those two when going through the process.
What a racket. Smells downright anti-competitive. The EU will have fun with this when it catches up.
There's nothing like a racket here. The list of certification agencies goes from KPMG at top end to smaller companies.
You don't get to interop with one of the biggest cloud providers in the world unless you complete commercial audits with one of their partners.
Given the kind of collusion Google's shown itself capable of [1] do you really think this is all fair?
[1] https://en.wikipedia.org/wiki/High-Tech_Employee_Antitrust_L...
I don't think you know how the EU works.
What? The EU wants to introduce certifications for all products and services, further kneecapping local innovation through regulation and costly certifications.
https://digital-strategy.ec.europa.eu/en/policies/cybersecur...
If I don't get one of these mandarin-approved certifications, will I no longer be able to do business? [The Google audits are a hard barrier to connecting to their cloud platform]
It's perhaps a difference between prescriptive and descriptive.
The big question here is if all this was preemptive or the response to something.
Which is why anyone and everyone should flat out avoid them as a company.
With Google Drive now at the center of so many companies' business data storage, I am certain it is a juicy target, and third-party access with full read and write access to that big hard drive full of proprietary data is something I would understand wanting to lock down... but not like this?
I don't think the market is anywhere near shifting to where businesses are going to dump Google Drive en masse, but as the ecosystem shrinks because so few companies can afford the cost to play in Google's backyard, it does make me wonder how many companies are going to absolutely resent Google, comparable to the way they resented Oracle.
Could be a Google Workspace policy where you can just set that employees can't access the corporate Drive account through third party apps, while it continues to work for personal accounts.
Entire companies have been destroyed because they rely on Amazon, Google, or some other service, and then have the rug pulled. Sometimes companies have even been destroyed, notably by Amazon, for having the wrong political viewpoints.
My rule of thumb is: Only use open source components, and only run my stuff on Linux. So that way I maintain full control over my stack, and stay mostly immune from the political rug pulls, and other kinds of rug pulls.
Ok, I'll ask: what company did Amazon destroy for having the wrong political viewpoint?
AWS hosts some pretty vile stuff without blinking. The last time a company made a big "woe is me, my ideas are being suppressed" claim against Amazon, it was Parler, and they weren't kicked off for their viewpoints. They were kicked off for operating a crime-ridden site with zero effective moderation.
Now twitter is doing badly, but that's not because of their political slant, it's because they're operating the business with ideology first, business acumen second approach. That's not politics, that's plain old bad execution.
About Musk, once he took over Twitter, that mostly solved the Social Media "Free Speech" problem, because as long as the most popular gathering place in the world is free we're [mostly] all free. So you're right, there's lots of reasons for Conservative optimism.
None of those businesses are part of the culture war, or if anything they're 'woke' businesses.
Thiel doesn't make money being conservative, he makes money on venture capital and running a big government surveillance business. Larry Ellison makes money price gouging on licensing. Elon Musk's main source of income is a 'woke' EV business. Rupert Murdoch makes money on tabloids, of which Fox News is just one flavor - and the tabloids are arguably 'woke' as they mainly pitch conspiracy theories that the 'normies' don't know about.
Twitter under Musk is no more free speech than it has ever been. They comply with the vast majority of censorship requests from authoritarian governments[1]. They most recently rolled over for Brazil's demands and are now fumbling their execution on paying the fines Brazil levied against them. Musk also censored Ken Klippenstein's account when he published a link to his Substack article about the JD Vance opposition research dump. So no, "free speech" on Twitter is just a slogan and a marketing campaign for low information news consumers, and it's working.
[1] https://english.elpais.com/international/2023-05-24/under-el...
Insofar as Musk not being a Free Speech Absolutist (which I'm not either), that's not news to me or anyone else who knows and respects him like I do. Musk is doing everything he can to keep legal speech from being censored, whereas the prior owners would cancel people permanently for trite trivial things like a mis-gendering (that wasn't even done out of malice), or simply claiming there's two genders.
Regarding Democrats getting censored themselves: While I'm a strong advocate of Free Speech as a general rule, I think after what the Democrats did for a decade (on censorship) they SHOULD be forced to reap what they sowed. So for example, I would've been perfectly in favor of Musk, as a one time act, permanently cancelling everyone who had in the past called for censorship of others. Those people didn't want Free Speech when they held power, and thus they are the ones who don't deserve to have Free Speech after they've lost their power.
You could see this as good for the consumer in cases where the abuse is bad, but the store I was at in the '00s got kicked off for selling some martial arts equipment that was legal in 47 states but on a naughty list we were unaware of. We listed it in a few colors and that was enough to get kicked out.
we can have better standards for speech and platforming than "you didn't moderate enough".
So Cloudflare wasn't bravely standing on principle, they were just doing garden variety collaboration with the feds.
[1] https://www.computerworld.com/article/1386577/us-seeks-lenie...
They literally do not say it is. Citation needed. Quote what you're referring to.
You can be obtuse about it if you want, but this is basically what it means. The only innocent people that the documents could've exposed would be related to american intelligence agencies
One thing is for sure though, AWS has never terminated a website because they exposed Russian intelligence documents, or because they made non american classified documents public. If you are american, then you can obviously play dumb here but it is blatant for everyone else.
And even beyond that, caring about the clearance level of a document is inherently political, and they explicitly say that it was one of the reasons for their decision to terminate WikiLeaks' hosting.
And again, AWS cited the fact that the documents were classified as being one of the reasons for the termination. You can't get more political than that. Especially when AWS does not care about it when it happens in other countries.
> People on Parler used the social network to stoke fear, spread hate, and allegedly coordinate the insurrection at the Capitol building on Wednesday. The app has recently been overrun with death threats, celebrations of violence, and posts encouraging “Patriots” to march on Washington, DC, with weapons on Jan. 19, the day before the inauguration of President-elect Joe Biden.
> In an email obtained by BuzzFeed News, an AWS Trust and Safety team told Parler Chief Policy Officer Amy Peikoff that the calls for violence propagating across the social network violated its terms of service. Amazon said it was unconvinced that the service’s plan to use volunteers to moderate calls for violence and hate speech would be effective.
Parler was used to coordinate the Jan 6 attacks, and when they were caught with their pants down they promised some half baked scheme to have unpaid volunteers do moderation. It was demonstrably a joke and they were caught failing to moderate more attack planning that was happening out in the open on their app. I think Parler leadership got off easy on this, they frankly should've been in jail on January 7th for being accomplices and not merely getting kicked off AWS.
[1] https://www.buzzfeednews.com/article/johnpaczkowski/amazon-p...
But regarding your last paragraph. Sure let's agree that everything you said was right. So what? It still shows that AWS does cut off consumers based on politics. I'm not aware of any legal action against Parler so I don't think they were accused of anything illegal. The fact that you agree with the political reasoning behind the decision does not make it any less political.
Especially since the only time that they ever intervened for something like this was when it happened in the US. It didn't happen during the Arab Spring, or the 2014 Ukrainian revolution, or any other time where people used an AWS hosted platform to coordinate a coup.
Other big social media companies generally own their own infra, so they don't need to get into existential crises when their landlords go looking into their activities.
> But regarding your last paragraph. Sure let's agree that everything you said was right. So what? It still shows that AWS does cut off consumers based on politics.
Not sure how you're able to contort this argument together. Parler was involved in crimes. AWS didn't need to prove that beyond a reasonable doubt like the justice system would; they merely had to have a good faith belief that crimes were happening on Parler and that Parler wasn't making good faith efforts to mitigate them. They didn't merely fail to moderate, they basically told Amazon to kick rocks when they were provided with evidence of crimes on their platform.
It's honestly kind of insulting to cry about political repression when it was just garden variety crime the whole time.
I agree with you about other social media platforms owning their own infra, by the way. But I'm not sure if that supports your point? If the only difference is that they own their own platforms, meaning they can do whatever, doesn't that show that AWS is actually unreliable for products like these? Which is what OP was arguing?
Also, it's weird to say that I was crying about political repression. My point was that your comment itself was arguing that they were still removed for political reasons. Which meant that you agree with the person you replied to, it's just that you think that it was morally correct which is besides the point.
And if Parler did commit a crime, or crimes, surely that would be public knowledge? Jan 6 lead to a rather intense series of prosecutions, so you'd think Parler would also face criminal charges. Unless you meant that it was used for criminal stuff, which is true. But that's a completely different standard, and one that AWS only applied to Parler (for obvious reasons). If you are saying that enabling criminal activities is a crime, then that would apply to other social media too (regardless of if they own their infra or no). Yet again, Facebook or YouTube has never been charged for anything like that.
It's totally fine since AWS was within its rights to ban them, but it's weird to argue that it had nothing to do with the politics of the situation. Again, AWS does not care about coups outside the US, which are just as illegal.
2) The other problem regarding censorship is that it has to be done by humans, and humans are not objective and benevolent. All humans will apply their own political ideologies towards their censorship decisions. This is true because your sense of morality is involved. That's what happened at Old Twitter. They were all Silicon Valley leftist moderators, and so they deemed conservative speech "immoral" and kicked people off for things even as mundane as misgendering or mere "impoliteness" to some "protected class". It got WAY out of hand. Thank God Musk came along and freed everyone.
Almost EVERY single conservative that was kicked off Twitter (before Musk bought it) was kicked for perfectly legal, and often perfectly polite, political speech.
> Only use open source components, and only run my stuff on Linux
Most people don't have the luxury of never having to interact with Google Drive, MS Teams, Slack etc.
EDIT: Of course if you're sure your politics are completely left-leaning you'll have no censorship worries, because these platforms are mostly Silicon Valley run. Also since conservatives basically don't play dirty in this way, the conservatives won't censor stuff just because it's left-leaning. We're for protecting freedom of all legal speech and actions.
For things like a convenience integration, that moment may never come. For other things, it's easy to estimate wrong, given how fuzzy the risks are.
Though this was a nice and welcome feature, it wasn’t Transmit’s only feature nor even its main one. I don’t think this sentiment applies, exactly.
I assume the retail side, not the AWS?
"End of the Road for Google Drive in Transmit"
Being unfamiliar with Transmit, the "and" gave me a startle.
Also believe that Unison was the perfect NNTP client and I miss it still.
It's all still pretty worthless though imho.
Imagine what would happen if we didn't have fuses and circuit breakers everywhere we use electricity. Any defective toaster could bring down the entire grid it was plugged into. There would have to be entirely new teams of inspectors and certifications and laws about even plugging in a lamp.
Madness
Yet here we are, 40+ years after collectively deciding that MULTICS was too complicated, and Unix was the way to go.
I thought we'd be able to see the folly of our ways by 2025 at the latest, but it's not going to happen. 8(
- Google Maps appears to be designed by non-drivers. Much of the time it is impossible to find out the name of cross streets near one's location by zooming in. Pins get added accidentally and are hard to categorize and find, there is no notion of neighborhoods, and the voice directions say the same redundant thing over and over (and it is often misleading). No intelligent person could design the product that way if they actually used it.
- Google's parental control features in android lack granularity, and the bias is toward kids watching garbage content as there is no way to share curated lists or for creators to become curators of high quality youtube content, etc. For anyone with young kids this is a must have feature and Google has ignored this kind of thing for years. Also if your kid's phone dies there is no way to remove it from the FamilyLink app! Someone really tested it thoroughly!
- Google Home / Nest. Exceptionally buggy devices. Basic functionality like shared speakers (all Nest over Nest wifi) are buggy and slow. "Hey Google" takes an extra few seconds to respond compared to Alexa and none of it is compatible with Google Advanced Security (Google's own feature!). Nobody building this tech is using it at home or else they would be furious about these big oversights.
- Gemini in Gmail is a total dud. It can't tell me what upcoming events are listed in my email inbox. It biases toward searching the inbox, and GMail inbox search has been highly broken for years. I participated in a user study at Google a while back and the PM admitted it was broken and would not be fixed.
Google is now a cash cow advertising business and thanks to Eric Schmidt (a brilliant but morally lacking individual) it has become a major defense contractor.
Thanks to OpenAI and others, Google search is already dead. The market hasn't caught up with this yet. I sincerely regret making Gmail my main email, as the company seems to have completely lost its way. In spite of a lot of brilliance, the lack of empathy with users and of drive to deliver products that solve problems persists.
- good engineers and managers earn well, especially in the US
- they want to own premium products
- where Google and Apple compete, Apple has gone for the premium end and Google has gone for the mass-market end
- speculation: thus Google employees aren't living in their own ecosystem
If it’s a desktop app, it should be possible to generate an API key and pass it to the desktop app, and that’s it. Google doesn’t know or care about who’s using it at that point.
The security flow being discussed should impact cloud-hosted applications.
What am I missing?
> Google doesn’t know or care about who’s using it at that point
is incorrect.
Google Drive uses OAuth. Users don't register API keys, apps do, and then users just log in with their Google account.
Google now requires apps to go through manual approval to actually use their OAuth keys, if those apps request certain endpoints. Doesn't matter if the app is local or cloud-hosted, if it makes certain REST requests, it needs special access, and Google controls which API keys get that special access.
See https://developers.google.com/drive/api/guides/api-specific-...
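To make the distinction concrete: a desktop app doesn't hold a user API key, it opens a consent URL in the browser. The sketch below uses Google's documented OAuth v2 authorization endpoint and the standard loopback-redirect pattern for installed apps; the client ID is a placeholder, and in practice Google must have approved that client ID for the requested scope before consent will succeed.

```python
# Consent URL a desktop app would open in the browser for Google's
# OAuth flow, using the documented v2 auth endpoint and the loopback
# redirect pattern for installed apps. The client_id is a placeholder.
from urllib.parse import urlencode

def consent_url(client_id, port=8080):
    params = urlencode({
        "client_id": client_id,
        "redirect_uri": f"http://127.0.0.1:{port}",
        "response_type": "code",
        "scope": "https://www.googleapis.com/auth/drive.file",
        "access_type": "offline",
    })
    return f"https://accounts.google.com/o/oauth2/v2/auth?{params}"

print(consent_url("PLACEHOLDER.apps.googleusercontent.com"))
```

The app then runs a tiny local HTTP server on that port to catch the authorization code and exchange it for tokens. Which is why Google can gate this per client ID: it controls whether the consent screen for that ID will grant the scope at all.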
I appreciate that this adds some additional security for Drive access, which is ostensibly a good thing, but it doesn't seem like the review is very useful or actually catches bad actors or errors.
“Eventually, by reaching out through friends of friends of friends to find someone inside Google who could help, we got in contact with a Google employee who was very helpful in getting the process started.”
Sounds exactly like dealing with an incompetent government agency.
It does disgust me that Google is going this route. I wonder how much influence is coming from governmental agencies. It is possible they are being forced in some way based on some kind of KYC-like requirements. Or perhaps the volume of bad actors is even higher than I imagine and Google is being forced to do this just to keep the lights on for the API at all. But the fact of the matter is that they are offloading the cost of whatever compliance they need onto their platform users, the people who are spending time and effort to improve the Google ecosystem. It feels petty and short-sighted but I suppose that Google has shifted into an extraction phase on behalf of their investors. We'll probably see a lot more of this kind of nickel and diming from them.
This is straight out of the IBM playbook. Did Google pick up some IBM flunkies recently?
What a terrible business practice. This was a company that once proudly displayed the motto, “don’t be evil” and even proved itself in various situations. Those days are long gone as the company is filled with more brain dead, unimaginative MBA flunkies.
I mean for Apple / Android / Windows(?) app store reviews you often don't get much choice (not until EU laws are fully complied with, anyway), as I've found out the hard way over the years developing apps.
> You may have seen iA Writer’s announcement that they are stopping development of their Android version for similar reasons. Our experience was different, but our circumstances are similar. While Google Drive may not be the most popular connection option in Transmit, we know many users rely on it, and we often use it here at Panic to send and receive files from the game developers we work with.