Instagram announced on Thursday that it would no longer allow graphic images of self-harm, such as cutting, on its platform. The change appears to be in response to public attention to how the social network might have influenced a 14-year-old’s suicide.

In a statement explaining the change, Adam Mosseri, the head of Instagram, made a distinction between graphic images of self-harm and nongraphic images, such as photos of healed scars. Those types of images will still be allowed, but Instagram will make them more difficult to find by excluding them from search results, hashtags and recommended content.

Facebook, which acquired Instagram in 2012 and is applying the changes to its own site, suggested in a separate statement that the changes were in direct response to the story of Molly Russell, a British teenager who killed herself in 2017.

Molly’s father, Ian Russell, has said publicly in recent weeks that he believes that content on Instagram related to self-harm, depression and suicide contributed to his daughter’s death.

The changes will “take some time” to put in place, Mr. Mosseri added.

Daniel J. Reidenberg, the executive director of the suicide prevention group Save.org, said that he had helped advise Facebook on its decision over the past week or so and that he applauded the company for taking the problem seriously.

Mr. Reidenberg said that because the company was now drawing a nuanced distinction between graphic and nongraphic content, there would need to be careful moderation of which images cross the line. Because the topic is so sensitive, artificial intelligence probably will not suffice, he said.

“You might have someone who has 150 scars that are healed up — it still gets to be pretty graphic,” he said in an interview. “This is all going to take humans.”

In Instagram’s statement, Mr. Mosseri said the site would continue to consult experts on other strategies for minimizing the potentially harmful effects of such content, including the use of a “sensitivity screen” that would blur nongraphic images related to self-harm.

He said Instagram was also exploring ways to direct users who are searching for and posting about self-harm to organizations that can provide help.

This is not the first time Facebook has had to grapple with how to handle threats of suicide on its site. In early 2017, several people live-streamed their suicides on Facebook, prompting the social network to ramp up its suicide prevention program. More recently, Facebook has used algorithms and user reports to flag possible suicide threats to local police agencies.

April C. Foreman, a psychologist and a member of the American Association of Suicidology’s board, said in an interview that there was not a large body of research indicating that barring graphic images of self-harm would be effective in alleviating suicide risk.