From February 28th to March 6th, Facebook was unable to extract the Open Graph data (link previews) of many social posts containing short links, deep links, or direct links to websites.
Many websites and platforms were affected by this issue (more details here and here), which originated within Facebook itself. JotURL was among them, although the impact was limited to deep links based on the jo.my and t.jo.my domains - no custom domains were affected.
The problem was caused by Facebook suddenly changing the behavior of its crawler (without any advance notice). This change introduced a bug that returned a 403 error when fetching the preview of many Facebook posts, so the preview itself failed to load (even though the JotURL link still redirected to the destination correctly).
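If you ever want to check on your side whether a 403 like this comes from your own server (for example, a security rule matching the crawler's user agent) or from Facebook itself, a quick test is to request your link while presenting the Facebook crawler's user agent. Below is a minimal sketch in Python; the link is a placeholder, and the real crawler may still behave differently (dedicated IP ranges, caching), so this only rules out server-side blocks.

```python
import requests

# User agent announced by the Facebook crawler (facebookexternalhit).
FB_CRAWLER_UA = "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)"

def check_as_fb_crawler(url: str) -> None:
    """Request a URL while presenting the Facebook crawler's user agent and print the result."""
    resp = requests.get(
        url,
        headers={"User-Agent": FB_CRAWLER_UA},
        allow_redirects=True,
        timeout=10,
    )
    print(f"{url} -> HTTP {resp.status_code} (final URL: {resp.url})")

# Placeholder: replace with one of your own short links.
check_as_fb_crawler("https://jo.my/your-short-link")
```

If your server answers 200 here but Facebook's Sharing Debugger still reports a 403, the error is being generated on Facebook's side, as it was in this case.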
The problem has now been fixed by Facebook, but we want to explain what to do in case it occurs again in the future (even though it should not).
1) Report the problem to your trusted Facebook contact, if you have one.
2) Report the problem to the Facebook Developer Community (signing in with your Facebook account is required) and upvote any existing reports on the same issue.
3) Try a custom branded domain instead of the jo.my domain, or a different custom domain, to understand whether the problem is tied to a specific domain (see the sketch after this list).
4) Inform us that you’ve gone through one or more of the previous points :)
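For point 3, the quickest check is to paste the same link on two different domains into Facebook's Sharing Debugger (https://developers.facebook.com/tools/debug/). If you prefer to script it, the Graph API also lets you force a re-scrape of a URL and see what the crawler retrieves. The sketch below is only an illustration: it assumes you have a valid access token, the API version and both links are placeholders, and the exact fields in the response may vary.

```python
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder: a valid Facebook access token

def rescrape(url: str) -> dict:
    """Ask the Graph API to re-scrape a URL and return what the crawler sees."""
    resp = requests.post(
        "https://graph.facebook.com/v19.0/",  # assumed API version
        params={"id": url, "scrape": "true", "access_token": ACCESS_TOKEN},
        timeout=30,
    )
    return resp.json()

# Compare a short-domain link with the same link on a custom branded domain (placeholders).
for link in ("https://jo.my/your-short-link", "https://go.yourbrand.com/your-short-link"):
    print(link, "->", rescrape(link))
```

If the jo.my link returns an error while the custom domain returns Open Graph data, the issue is tied to the domain rather than to your links' configuration.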
How did JotURL handle the resolution of the issue?
As soon as we noticed irregularities, our team launched an internal investigation to rule out the possibility that the bug was caused by errors in our service.
We first checked the behavior of our system: JotURL never returns a 403 error, so the issue could not be directly caused by the logic of our engine. For the avoidance of doubt, however, we still considered that factors related to our service might be involved and decided to investigate further. It was certainly not the fault of the source either, since the bug still showed up when using sources other than Amazon, Target, Walmart, etc.
We also ruled out the Easy Deep Linking (EDL) system in JotURL, as the error occurred even when EDL was not active. This indicated that our system always carried out the deep linking operation correctly, even when the preview was not picked up by the Facebook crawler.
We then investigated the security rules of our system, and more in-depth tests showed that none of them prevented the crawler from reaching our servers. Our system responded correctly, yet Facebook's debugging tools still showed no change and the error persisted.
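A practical way to verify that your servers really are answering the crawler (as ours were) is to look at the requests identified by the facebookexternalhit user agent in your access logs and count the status codes returned. The sketch below assumes a standard combined log format and a hypothetical log path, so adjust both to your setup.

```python
import re
from collections import Counter

# Hypothetical path; adjust to your web server's access log location.
LOG_PATH = "/var/log/nginx/access.log"

# Combined log format: the status code is the first number after the quoted request line.
STATUS_RE = re.compile(r'"\s(\d{3})\s')

counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "facebookexternalhit" in line:
            match = STATUS_RE.search(line)
            if match:
                counts[match.group(1)] += 1

# If the crawler's requests are all 200s here but the Sharing Debugger
# reports 403, the error is not coming from your server.
print(counts)
```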
We also made changes to the robots.txt file (removing it entirely, or explicitly allowing the Facebook crawler) to rule out the possibility that the crawler was being blocked at that level, but this did not change the outcome either.
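If you manage your own robots.txt, you can quickly verify that it is not what blocks the Facebook crawler. The sketch below uses Python's standard robotparser against a placeholder domain and link; it only tells you what your robots.txt allows, not how Facebook actually interprets it.

```python
from urllib import robotparser

# Placeholder domain: replace with the domain used by your links.
parser = robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()

# The Facebook crawler identifies itself as "facebookexternalhit".
allowed = parser.can_fetch("facebookexternalhit", "https://example.com/your-short-link")
print("facebookexternalhit allowed:", allowed)
```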
We then verified that the HTTPS protocol did not influence the behavior of the crawler, and indeed it did not.
Although we had noticed from the beginning that the impact was limited to the jo.my domain, we still carried out further tests with custom domains, which turned out not to be affected by the bug and always responded correctly.
We then ran further tests by hosting static pages outside our infrastructure, on the same domain, and even in that case the crawler reported a 403 error.
At this point, we confirmed that the problem was outside our control and was most likely a behavior of the crawler itself or, alternatively, a potential soft/shadow ban imposed by Facebook (still without any apparent reason).
In the meantime, from the very beginning, we used the official channel available to developers, the Facebook Developer Community, where our report was soon joined by dozens of similar ones.
The response was slow to arrive: Facebook acknowledged a problem in the crawler but blamed the robots.txt configuration of the affected users, explaining that some configurations returned errors, even though, as explained above, we (and many other developers from different companies) had already run tests that disproved this hypothesis.
The problem was then “suddenly” solved, for everyone, on the Facebook side.
We do not expect this issue to occur again but, since it originated exclusively on Facebook's side, we cannot guarantee it, which is why we think it is important to inform you.
Our commitment remains to communicate this type of problem openly and transparently, as we are doing here.
We hope Facebook will do the same in the future; it has already accepted our request to specify more clearly the reasons for any errors reported in the Sharing Debugger (in fact, this behavior changed after JotURL's request).
If you have any questions about this topic, please contact us via our HelpDesk - we are always at your disposal.
Cheers,
JotURL Team