Channel: Media Foundation Development for Windows Desktop forum

IMFMediaEngineClassFactoryEx::CreateMediaSourceExtension flags and attributes


IMFMediaEngineEx::GetResourceCharacteristics flags

The docs for IMFMediaEngineEx::GetResourceCharacteristics are confusing.  Does it return a "bitwise OR of zero or more flags"?  Or discrete values?  For example, does 3 mean "live" + "seekable"?  Or "pauseable"?  Or does pauseable imply live + seekable?

It looks like a design bug.

And while you are at it, how about creating an enum for the values?
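
For illustration, here is roughly the kind of enum I mean. The names and bit values below are my own invention, not from the SDK headers; the point is just that a bitwise-OR contract would let 3 unambiguously mean "live | seekable":

typedef enum _MF_RESOURCE_CHARACTERISTICS_HYPOTHETICAL
{
    MF_RESOURCE_CHAR_LIVE      = 0x1,   // hypothetical: resource is a live stream
    MF_RESOURCE_CHAR_SEEKABLE  = 0x2,   // hypothetical: resource supports seeking
    MF_RESOURCE_CHAR_PAUSEABLE = 0x4    // hypothetical: resource supports pause
} MF_RESOURCE_CHARACTERISTICS_HYPOTHETICAL;

// With a bitwise-OR contract, each capability is tested independently.
DWORD dwFlags = 0;
HRESULT hr = pMediaEngineEx->GetResourceCharacteristics(&dwFlags);
if (SUCCEEDED(hr) && (dwFlags & MF_RESOURCE_CHAR_SEEKABLE))
{
    // The resource can be seeked.
}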

How to distinguish between multiple joysticks?

Hi,

How to distinguish between multiple joysticks?

I'm using two identical joysticks in my software, but sometimes (after a computer restart) the joystick API reports the same joystick (on the same USB port) with a different ID, e.g.:

- The left joystick has ID 0 and the right joystick has ID 1. After five restarts, the left joystick has ID 1 and the right joystick has ID 0.

I tried to bind each joystick to a USB port (USB port list based on the USBView sample), but the joystick API doesn't give any information that can be "linked" with the device information available for a USB port.

I can set a "Preferred device" in the joy.cpl advanced settings, but I don't know how to find out which joystick is preferred.

PS I'm using mingw x64 4.9.2 posix.
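
For reference, this is roughly how I enumerate the devices today with the winmm joystick API (plain Win32, builds with MinGW). Since both sticks are identical, the vendor/product IDs don't help me tell them apart, which is exactly the problem:

#include <windows.h>
#include <mmsystem.h>   // link with -lwinmm
#include <stdio.h>

void ListJoysticks(void)
{
    UINT numDevs = joyGetNumDevs();
    for (UINT id = 0; id < numDevs; ++id)
    {
        JOYCAPSW caps;
        if (joyGetDevCapsW(id, &caps, sizeof(caps)) == JOYERR_NOERROR)
        {
            // wMid/wPid are identical for both sticks; szPname is the generic name.
            wprintf(L"ID %u: %ls (VID %04x, PID %04x)\n",
                    id, caps.szPname, caps.wMid, caps.wPid);
        }
    }
}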

IMFTransform::GetInputStreamInfo & MFT_INPUT_STREAM_PROCESSES_IN_PLACE

In an effort to understand MFTs better, I decided to try changing the mft_grayscale sample to support MFT_INPUT_STREAM_PROCESSES_IN_PLACE (as described here).  While Grayscale doesn't require a great deal of CPU power, copying buffers between two IMFSamples seems unnecessarily inefficient.

I made what I thought were the appropriate changes, but I found that it didn't work.  The problem seems to be that MF never actually calls GetInputStreamInfo (which is where MFT_INPUT_STREAM_PROCESSES_IN_PLACE gets set).  Instead it skips straight to GetOutputStreamInfo, where adding the MFT_OUTPUT_STREAM_PROVIDES_SAMPLES flag (which is what the docs tell you to do for 'in place') causes MF to return 0x80070057 (The parameter is incorrect).

Looking for working implementations, I found that the mft_audiodelay sample sets MFT_INPUT_STREAM_PROCESSES_IN_PLACE, but its GetInputStreamInfo is not getting called either.

Is there some trick here I missed?  Or does MF never really use PROCESSES_IN_PLACE?

Tested on Windows 7 and 8.1, using the MFPlayer2 sample to load the MFTs.
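
For reference, these are roughly the changes I made to the two stream-info methods in the Grayscale sample (simplified; stream ID validation and the remaining fields are zeroed for brevity):

HRESULT CGrayscale::GetInputStreamInfo(DWORD dwInputStreamID, MFT_INPUT_STREAM_INFO *pStreamInfo)
{
    if (pStreamInfo == NULL) return E_POINTER;
    pStreamInfo->hnsMaxLatency = 0;
    // Tell the pipeline we modify the input sample's buffers directly.
    pStreamInfo->dwFlags = MFT_INPUT_STREAM_WHOLE_SAMPLES |
                           MFT_INPUT_STREAM_SINGLE_SAMPLE_PER_BUFFER |
                           MFT_INPUT_STREAM_PROCESSES_IN_PLACE;
    pStreamInfo->cbSize = 0;
    pStreamInfo->cbMaxLookahead = 0;
    pStreamInfo->cbAlignment = 0;
    return S_OK;
}

HRESULT CGrayscale::GetOutputStreamInfo(DWORD dwOutputStreamID, MFT_OUTPUT_STREAM_INFO *pStreamInfo)
{
    if (pStreamInfo == NULL) return E_POINTER;
    // Per the docs, an in-place MFT "provides" its output samples
    // (it hands back the input sample it was given).
    pStreamInfo->dwFlags = MFT_OUTPUT_STREAM_WHOLE_SAMPLES |
                           MFT_OUTPUT_STREAM_SINGLE_SAMPLE_PER_BUFFER |
                           MFT_OUTPUT_STREAM_PROVIDES_SAMPLES;
    pStreamInfo->cbSize = 0;
    pStreamInfo->cbAlignment = 0;
    return S_OK;
}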

Writing MFTs for MediaFoundation

I recently discovered that there are (at least) two different sets of code inside MF used to drive MFTs.  People creating general-purpose MFTs should be aware of this fact so that their MFTs can be tested under both.

My question is: Are there only 2?

I realize that anyone can write their own code to drive MFTs, but at a minimum, MFTs should be expected to work with the standard Microsoft MF implementations.  That is difficult to do if you don't know where they all are.

I see that the IMFMediaEngineEx interface has InsertVideoEffect.  Does this use a third driver?  Please tell me IMFMediaEngineEx doesn't use the (flawed) code from IMFPMediaPlayer...


How to capture raw format image using media foundation?

Hello,

I am new to Media Foundation's video capture API, but I have an app that performs a video capture preview from a webcam. I picked up most of the ideas from this sample code:

CaptureEngine video capture sample

I am facing a problem saving the video buffer in raw format from the video stream. I am trying the following approach.

Using IMFCapturePhotoSink, I capture one frame from the video stream. There is no raw container support in the capture engine, so I registered a callback using IMFCapturePhotoSink::SetSampleCallback() to retrieve the buffer from the video source.

After registering the callback, the IMFCaptureEngineOnSampleCallback::OnSample() method receives a sample. OnSample() is not called when streaming MJPG video, but I do receive a sample when streaming in YUY2 format. The code is below for reference.

HRESULT CTakePhoto()
{
    IMFCaptureSink      *pSink = NULL;
    IMFCapturePhotoSink *pPhoto = NULL;
    IMFCaptureSource    *pSource = NULL;
    IMFMediaType        *pMediaType = NULL;

    // Get a pointer to the photo sink.
    HRESULT hr = m_pEngine->GetSink(MF_CAPTURE_ENGINE_SINK_TYPE_PHOTO, &pSink);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = pSink->QueryInterface(IID_PPV_ARGS(&pPhoto));
    if (FAILED(hr))
    {
        goto done;
    }

    hr = m_pEngine->GetSource(&pSource);
    if (FAILED(hr))
    {
        PrintDebug(L"GetSource : GetLastError() = %d\r\n", GetLastError());
        goto done;
    }

    // 1 is the image stream index.
    hr = pSource->GetCurrentDeviceMediaType(1, &pMediaType);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = pPhoto->RemoveAllStreams();
    if (FAILED(hr))
    {
        PrintDebug(L"RemoveAllStreams failed 0x%x %d\r\n", hr, GetLastError());
        goto done;
    }

    DWORD dwSinkStreamIndex;

    // Try to connect the first still image stream to the photo sink.
    hr = pPhoto->AddStream((DWORD)MF_CAPTURE_ENGINE_PREFERRED_SOURCE_STREAM_FOR_PHOTO, pMediaType, NULL, &dwSinkStreamIndex);
    if (FAILED(hr))
    {
        goto done;
    }

    // Register the callback.
    hr = pPhoto->SetSampleCallback(&SampleCb);
    if (FAILED(hr))
    {
        goto done;
    }

    hr = m_pEngine->TakePhoto();
    if (FAILED(hr))
    {
        goto done;
    }

done:
    SafeRelease(&pSink);
    SafeRelease(&pPhoto);
    SafeRelease(&pSource);
    SafeRelease(&pMediaType);
    return hr;
}

// Callback class that receives the captured sample.
STDMETHODIMP CSampleCallback::OnSample(_In_ IMFSample *pSample)
{
    if (pSample != NULL)
    {
        IMFMediaBuffer *pMediaBuffer = NULL;
        BYTE           *pBuffer = NULL;
        DWORD           dwBufLen = 0;

        HRESULT hr = pSample->ConvertToContiguousBuffer(&pMediaBuffer);
        if (FAILED(hr))
        {
            OutputDebugString(L"ConvertToContiguousBuffer failed\r\n");
            return S_FALSE;
        }

        hr = pMediaBuffer->Lock(&pBuffer, NULL, &dwBufLen);
        if (FAILED(hr))
        {
            OutputDebugString(L"Lock failed\r\n");
            SafeRelease(&pMediaBuffer);
            return S_FALSE;
        }

        // ... pBuffer / dwBufLen hold the raw frame here (e.g. write it to a file) ...

        hr = pMediaBuffer->Unlock();
        if (FAILED(hr))
        {
            OutputDebugString(L"Unlock failed\r\n");
            SafeRelease(&pMediaBuffer);
            return S_FALSE;
        }

        SafeRelease(&pMediaBuffer);
    }
    return S_OK;
}

Why can't I receive a sample when streaming MJPG video? Why does this behaviour occur?
Did I miss anything in the configuration needed to receive a sample? Can you help me sort out this problem?

Thanks in advance.

Regards,

Ambika


MF doesn't play h264 video from my source

I’m developing a media foundation-based h264 player.

I open my URL.

The framework creates and initializes my media source, which in turn creates and initializes my 2 streams, audio and video.

Then it asks for video samples until the end of the file is reached. The log is filled with CMFTransformDetours::ProcessOutput failed hr=0xC00D6D72 MF_E_TRANSFORM_NEED_MORE_INPUT

Then after my video stream sends MEEndOfStream, the framework asks for a few more audio samples, finally transitions the state to playing, and starts to play audio only.

What does the framework try to find in my video stream that isn’t there?

The same file plays fine with the same player code if opened with the built-in stream source. mftrace.exe shows that when the built-in stream source plays the file, the first video sample is 38 bytes longer than when my stream source plays it (all other samples are exactly the same length).

38 bytes is exactly the size of my MF_MT_MPEG_SEQUENCE_HEADER value for the video stream (i.e. 00 00 01 + SPS + 00 00 01 + PPS). I’ve tried prepending the MF_MT_MPEG_SEQUENCE_HEADER value to my first sample; it didn’t help.
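
For reference, this is roughly how I attach the sequence header to the video media type in my source (simplified; m_seqHeader/m_cbSeqHeader are placeholders for the 38 bytes described above):

// Placeholder for the 38-byte "00 00 01 + SPS + 00 00 01 + PPS" blob described above.
BYTE   m_seqHeader[38];
UINT32 m_cbSeqHeader = 38;

HRESULT hr = pVideoMediaType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
if (SUCCEEDED(hr))
    hr = pVideoMediaType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_H264);
if (SUCCEEDED(hr))
    hr = pVideoMediaType->SetBlob(MF_MT_MPEG_SEQUENCE_HEADER, m_seqHeader, m_cbSeqHeader);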

The system-provided stream source sets an undocumented attribute on video samples, GUID = {19124E7C-AD4B-465F-BB18-20186287B6AF}; the values are 8-byte binary values like “09 00 00 00 29 0d 00 00”, different on each frame. What is that, and can it be the reason?

What else can I try?

Is there any documentation on what exactly the MF H.264 decoder wants on input?

Thanks in advance.

How to find the performance bottleneck when playing a 4K video?

I am running into frames being dropped from time to time when playing a 4K video with Media Foundation.
The test movie I am using has an average bitrate of 20 Mbps and a peak of 37 Mbps and is H.264 encoded.
The GPU is an NVIDIA GeForce GTX 970. Rendering itself does not seem to be the problem, since lowering the resolution to Full HD still shows the frame drops.

What are the easiest steps to find the performance bottleneck that causes the frames to be dropped?

I just created a GitHub repository with a SharpDX (a .NET wrapper around DirectX) based application that plays a video onto a fullscreen quad with a custom shader effect, as an example for testing.

https://github.com/rolandsmeenk/SharpDXVideoPlayer

Roland


Problem with seek operation in MF Media Source

I am using TopoEdit to play the audio stream of a local file. I have specified (MFMEDIASOURCE_CAN_PAUSE | MFMEDIASOURCE_CAN_SEEK) in IMFMediaSource::GetCharacteristics, MFBYTESTREAM_IS_SEEKABLE is true for that audio stream, and I haven't used IMFByteStream for reading the audio samples.

I am able to play, pause and stop. But on seeking, IMFMediaSource::Start is not called. I have also referred to the WavSource SDK sample (which is documented as supporting seek), but when I run it, it does not seek either. I don't know the reason; please help me find a solution to this issue.
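
For reference, this is roughly what my GetCharacteristics looks like, and how I would expect a seek to show up in Start (simplified sketch; CMySource is just a placeholder for my source class):

STDMETHODIMP CMySource::GetCharacteristics(DWORD *pdwCharacteristics)
{
    if (pdwCharacteristics == NULL) return E_POINTER;
    *pdwCharacteristics = MFMEDIASOURCE_CAN_PAUSE | MFMEDIASOURCE_CAN_SEEK;
    return S_OK;
}

STDMETHODIMP CMySource::Start(IMFPresentationDescriptor *pPD,
                              const GUID *pguidTimeFormat,
                              const PROPVARIANT *pvarStartPosition)
{
    // VT_EMPTY means "start from the current position"; for a seek I would
    // expect VT_I8 with the new position in 100-ns units.
    if (pvarStartPosition != NULL && pvarStartPosition->vt == VT_I8)
    {
        LONGLONG hnsSeekTo = pvarStartPosition->hVal.QuadPart;
        // ... reposition the streams to hnsSeekTo ...
    }
    // ... queue MESourceStarted / MESourceSeeked and the stream events ...
    return S_OK;
}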

Thanks in advance,

Deepak Ranganathan

serious bug in IDXGISwapChain1::Present1()

Depending on the situation, the "SyncInterval" parameter in the Present1() call has a different meaning:

https://msdn.microsoft.com/en-us/library/windows/desktop/hh446797%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396

a) bitblt model: "Synchronize presentation for at least n vertical blanks".
b) Flip model: "Synchronize presentation after the nth vertical blank".

This difference alone would be annoying but not a big problem, if it were clearly defined in which situation which interpretation were used. Unfortunately in real life DXGI switches between different interpretations in unexpected ways.

Practically, in Windows 8.1 x64, I experience the following:


1) In simple windowed mode, when using the flip model, as long as my rendering window is only covering a part of the screen, everything behaves as expected. SyncInterval is interpreted as b).

2) If I switch this flip model swap chain into fullscreen exclusive mode, suddenly the SyncInterval is interpreted as a), even though I haven't changed the model (still flip). However, if I try to create a swap chain directly in fullscreen exclusive mode with the flip model, DXGI reports an error. So it seems that in fullscreen exclusive mode the flip model is not supported, and my best guess is that if I switch a flip model swap chain into fullscreen exclusive mode, it silently changes to DISCARD. Which is somewhat unexpected, but OK.

3) Here comes the *real* problem: If my flip model rendering window covers the whole screen (including the taskbar), the SyncInterval is suddenly interpreted as a). If I right click on the rendering window to open a context menu, SyncInterval is interpreted as b). If the context menu disappears, SyncInterval is interpreted as a). If I open a settings window which covers part of the rendering window, SyncInterval is interpreted as b). If I close that settings window, SyncInterval is interpreted as a). So basically, based on the size and position of the rendering window, and based on whether there's any other window covering the rendering window or not, DXGI switches back and forth between different SyncInterval interpretations all the time, without giving me any notice about it. Please note that we're *not* talking about fullscreen exclusive mode here. I know for a fact that I'm still in windowed mode all the time. How am I supposed to calculate proper SyncInterval values if DXGI changes the meaning of this parameter all the time behind my back??

I'm aware that 99.9% of all applications never use any SyncInterval value other than 0 or 1, and with those values there's no difference. But I want to use values higher than 1 in some situations, and then suddenly this DXGI behaviour becomes a major problem.

(P.S.: You may wonder why I want to use values higher than 1. The reason is that I'm writing a video renderer, which wants to render and present several frames in advance. Now if I render a 24 fps movie on a 60 Hz screen, I can only present several frames in advance either by presenting the same frame multiple times to cover each VSync, or by using a SyncInterval value bigger than 1. The latter saves GPU performance, which is why I want to use it.)

(P.P.S: You may wonder how I know which SyncInterval interpretation is active at any given time: Easy, by analyzing the "IDXGISwapChain::GetFrameStatistics()" information).
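
For completeness, this is roughly how I sample the statistics (simplified; pSwapChain is my swap chain):

DXGI_FRAME_STATISTICS stats = {};
HRESULT hr = pSwapChain->GetFrameStatistics(&stats);
if (SUCCEEDED(hr))
{
    // PresentCount        - running count of frames that were actually presented
    // PresentRefreshCount - vblank count at which the last presented frame was displayed
    // SyncRefreshCount    - vblank count at the last sampled vsync
    // Watching how PresentRefreshCount advances per Present call shows which
    // SyncInterval interpretation is currently in effect.
}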

MFTs & MFT_MESSAGE_COMMAND_TICK and MFT_MESSAGE_DROP_SAMPLES

While the docs do a very good job of describing how & when most messages are sent and how MFTs are expected to respond, the docs for MFT_MESSAGE_COMMAND_TICK and MFT_MESSAGE_DROP_SAMPLES are noticeably lacking.

Other than saying they require Windows 7 or Windows 8, there aren't many clues.

At a guess, I suspect MFT_MESSAGE_COMMAND_TICK is related to IMFSinkWriter::SendStreamTick.  And from the use of 'COMMAND' in the name, I'm guessing this is not just a notification that a tick was generated, but rather a command to produce one.  What does the message parameter contain?  It can't be the Timestamp since it wouldn't fit.

If I'm right about what this is, I don't know how practical it is to expect all MFTs to support it.  ProcessMessage says that if an MFT doesn't support a specific message it should just return S_OK.  But since this is what I would expect an MFT to return if it DOES support it, does this seem right?  Maybe we are supposed to return S_FALSE if we don't support it?
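
For illustration, this is the kind of handler I mean (simplified; CMyTransform is a placeholder for my MFT class):

STDMETHODIMP CMyTransform::ProcessMessage(MFT_MESSAGE_TYPE eMessage, ULONG_PTR ulParam)
{
    switch (eMessage)
    {
    case MFT_MESSAGE_COMMAND_FLUSH:
        // Discard any buffered input sample here.
        return S_OK;

    case MFT_MESSAGE_COMMAND_TICK:
    case MFT_MESSAGE_DROP_SAMPLES:
        // Not supported -- but S_OK is also what a supporting MFT would return,
        // which is exactly the ambiguity described above.
        return S_OK;

    default:
        return S_OK;
    }
}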

I'm really stumped by MFT_MESSAGE_DROP_SAMPLES.  The name doesn't contain either COMMAND or NOTIFY, so I'm not sure what it might be trying to tell me.  Maybe something to do with MF_QUALITY_DROP_MODE?  What is its parameter, and what should I do if I get one?

One last point:  This page says "Some messages require specific actions from the MFT. These events have "MESSAGE" in the message name."  But *ALL* messages have "MESSAGE" in the name.  Was this supposed to say "COMMAND"?  And while you are fixing that, you should either call them 'messages' or 'events,'  not both.  Maybe something like "Messages that have "COMMAND" in their name require specific actions from the MFT."

Problems with IMFSampleGrabberSinkCallback and MP4 files

I am implementing an IMFSampleGrabberSinkCallback-based class to capture and preview video frames from video files.

The IMFMediaType passed in to MFCreateSampleGrabberSinkActivate() specifies MFVideoFormat_RGB32, so I am configuring IMFSampleGrabberSinkCallback::OnProcessSample() to receive data in uncompressed RGB32 format.
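
For reference, this is roughly how I create the media type and the sink activate (simplified; pCallback is my IMFSampleGrabberSinkCallback implementation):

IMFMediaType *pType = NULL;
IMFActivate  *pSinkActivate = NULL;

HRESULT hr = MFCreateMediaType(&pType);
if (SUCCEEDED(hr))
    hr = pType->SetGUID(MF_MT_MAJOR_TYPE, MFMediaType_Video);
if (SUCCEEDED(hr))
    hr = pType->SetGUID(MF_MT_SUBTYPE, MFVideoFormat_RGB32);
if (SUCCEEDED(hr))
    hr = MFCreateSampleGrabberSinkActivate(pType, pCallback, &pSinkActivate);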

And everything almost works.

I have been able to successfully playback .wmv files at resolutions of (640, 480) and (1920, 1080).

I have also succeeded in playing back an .mp4 file of resolution (1280, 720).

However, the .mp4 files at (1920, 1080) resolution do not work. No errors are reported by any of the MF system calls, but the IMFSampleGrabberSinkCallback::OnProcessSample() function is simply never called for these files.

MFRequireProtectedEnvironment() returns S_FALSE, so I do not think that is an issue.

Does anyone know what might be happening here? Can anyone point me to IMFSampleGrabberSinkCallback sample code that can reliably process 1080p HD MP4 files?

how to configure encoder settings in SinkWriter

Hi,

I'm new to Media Foundation development and I have some questions about using the SinkWriter.

I already know that I can use the SinkWriter to write to a file on disk, as described here: https://msdn.microsoft.com/en-us/library/windows/desktop/ff819477(v=vs.85).aspx

I just need to call MFCreateSinkWriterFromURL, then set the media type (frame rate, resolution, pixel format, etc.), and finally call WriteSample to write frames.

What I want to do is generate an MP4 file with H.264 video, but I also want more control over the H.264 encoder settings, like quality factor and buffer size, which I think are only available on the H.264 MFT.

So I just want to know: how can I use the H.264 MFT together with the SinkWriter?
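
From what I have read, one possible approach (which I have not verified) is to ask the sink writer for the encoder's ICodecAPI after the media types have been set, roughly like this (dwVideoStreamIndex is the stream index returned by AddStream; whether the H.264 encoder honours these properties in this scenario is exactly what I'm unsure about):

#include <codecapi.h>

ICodecAPI *pCodecApi = NULL;
HRESULT hr = pSinkWriter->GetServiceForStream(dwVideoStreamIndex,
                                              GUID_NULL,
                                              IID_PPV_ARGS(&pCodecApi));
if (SUCCEEDED(hr))
{
    VARIANT var;
    VariantInit(&var);

    // Ask for quality-based rate control...
    var.vt = VT_UI4;
    var.ulVal = eAVEncCommonRateControlMode_Quality;
    hr = pCodecApi->SetValue(&CODECAPI_AVEncCommonRateControlMode, &var);

    // ...and set the quality factor (0-100).
    if (SUCCEEDED(hr))
    {
        var.vt = VT_UI4;
        var.ulVal = 70;
        hr = pCodecApi->SetValue(&CODECAPI_AVEncCommonQuality, &var);
    }
    pCodecApi->Release();
}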

Thanks


Access a hidden file programmatically in Windows RT?

Is there any way to access hidden files programmatically in Windows RT?

Whenever I try to do GetFileAsync for a hidden file, I get UnauthorizedAccessException.

Storing variables when windows app is closed and accessing them later

Is there any way I can store some values and access them the next time the Windows app is started?

I tried storing the values in a text file and accessing them later, but due to a file access issue I need to store these values in some kind of variable instead.
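
For what it's worth, this is the kind of thing I was hoping to use instead of the text file: a rough C++/CX sketch using ApplicationData local settings (untested; "lastValue" is just an example key):

using namespace Windows::Storage;
using namespace Windows::Foundation;

// Save a value before the app closes / suspends.
ApplicationDataContainer^ localSettings = ApplicationData::Current->LocalSettings;
localSettings->Values->Insert("lastValue", PropertyValue::CreateInt32(42));

// Read it back on the next launch.
if (localSettings->Values->HasKey("lastValue"))
{
    int lastValue = safe_cast<IPropertyValue^>(
        localSettings->Values->Lookup("lastValue"))->GetInt32();
}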


Capture image from still pin using media session technique in media foundation

Hi,

I am a beginner with Media Foundation. I'm using the media session/topology technique to show a preview from a USB camera device.

I built the topology to show preview and capture video; both work successfully.

I am facing a problem saving an image from the image stream (still pin). I am trying the following approach.

  1. Shut down the topology and session.
  2. Created a new topology and session.
  3. Activated the media source.
  4. Deselected the video stream (capture pin) and selected the image stream (still pin). Added the source node to the topology.
  5. Added a sample grabber callback sink output node (to save a frame to a file).
  6. Set the topology.
  7. The session raised the MF_TOPOSTATUS_READY event and started to play.
  8. After playback started, the media session event gives HRESULT 0x80070057 (E_INVALIDARG).

I am able to get and set the still pin resolution and format.

Why can't I build a topology to grab a frame? Why am I getting this HRESULT? Am I building the topology correctly, or am I missing something?

I have referred to sample code from the internet; most of it takes a photo from the video stream, not from the image stream.

Please guide me on how to build the topology to capture an image from the still pin. A sample would be very helpful for solving this issue.
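
For reference, this is roughly how I try to add the sample grabber output node for the still pin (simplified sketch; pStillPinType, pCallback, pTopology and pSourceNode are placeholders for the objects created in the steps above):

IMFActivate     *pGrabberActivate = NULL;
IMFTopologyNode *pOutputNode = NULL;

// Sample grabber sink that calls my IMFSampleGrabberSinkCallback.
HRESULT hr = MFCreateSampleGrabberSinkActivate(pStillPinType, pCallback, &pGrabberActivate);

if (SUCCEEDED(hr))
    hr = MFCreateTopologyNode(MF_TOPOLOGY_OUTPUT_NODE, &pOutputNode);
if (SUCCEEDED(hr))
    hr = pOutputNode->SetObject(pGrabberActivate);
if (SUCCEEDED(hr))
    hr = pTopology->AddNode(pOutputNode);
if (SUCCEEDED(hr))
    hr = pSourceNode->ConnectOutput(0, pOutputNode, 0);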

Thanks in advance.

Regards,

Ambika




Rotation and Distortion issue when I rotate device from landscape mode to Portrait mode for Live preview window

I have a camera application with a container size of 640x480. When I keep the device in landscape mode, I get a perfect image (live preview). When I rotate the device 90 or 270 degrees into portrait mode with the live preview window, the image gets distorted and does not rotate. I have the routines RotateImage90() and RotateImage270(); both use TranslateTransform(), ScaleTransform() and DrawImage() to draw the image.

My issue is: when I rotate the device from landscape mode to portrait mode, the image does not rotate and some distortion is introduced. The two routines below rotate clockwise and anti-clockwise. In landscape mode I get a perfect 640x480 image (the container size matches the image size), whereas in portrait mode the image is much smaller (distorted) and still not rotated. Can I use these routines for a live preview window? Any help is appreciated.

// Clockwise rotation for portrait mode.
void OrientationTransform::RotateImage90(BYTE* data, int dataLength)
{
    // Make a copy of the image buffer to draw from.
    BYTE *dataTemp = new BYTE[dataLength];
    memcpy(dataTemp, data, dataLength);

    BITMAPINFOHEADER bih = m_videoInfo.bmiHeader;
    Bitmap bmp2(bih.biWidth, bih.biHeight, m_stride, m_pixFmt, dataTemp);  // source copy
    Bitmap bmp(bih.biWidth, bih.biHeight, m_stride, m_pixFmt, data);       // destination (in place)

    Graphics g(&bmp);
    g.Clear(Color::Black);

    g.TranslateTransform(-(float)bmp2.GetWidth() / 2, -(float)bmp2.GetHeight() / 2);
    g.ScaleTransform(bmp2.GetWidth() / 2, bmp2.GetHeight() / 2);
    g.RotateTransform(90);
    g.TranslateTransform((float)bmp2.GetWidth() / 2, (float)bmp2.GetHeight() / 2);
    g.DrawImage(&bmp2, 0, 0);

    delete[] dataTemp;  // allocated with new[], so delete[] is needed
}

// Anti-clockwise rotation for portrait mode.
void OrientationTransform::RotateImage270(BYTE* data, int dataLength)
{
    // Make a copy of the image buffer to draw from.
    BYTE *dataTemp = new BYTE[dataLength];
    memcpy(dataTemp, data, dataLength);

    BITMAPINFOHEADER bih = m_videoInfo.bmiHeader;
    Bitmap bmp2(bih.biWidth, bih.biHeight, m_stride, m_pixFmt, dataTemp);  // source copy
    Bitmap bmp(bih.biWidth, bih.biHeight, m_stride, m_pixFmt, data);       // destination (in place)

    Graphics g(&bmp);
    g.Clear(Color::Black);

    g.TranslateTransform(-(float)bmp2.GetWidth() / 2, -(float)bmp2.GetHeight() / 2);
    g.ScaleTransform(bmp2.GetWidth() / 2, bmp2.GetHeight() / 2);
    g.RotateTransform(270);
    g.TranslateTransform((float)bmp2.GetWidth() / 2, (float)bmp2.GetHeight() / 2);
    g.DrawImage(&bmp2, 0, 0);

    delete[] dataTemp;  // allocated with new[], so delete[] is needed
}
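
For comparison, this is my understanding of the usual GDI+ pattern for rotating about the centre: move the origin to the centre of the destination, rotate, then centre the source on the origin (a sketch, assuming the destination bitmap already has the rotated dimensions; no ScaleTransform involved):

void RotateAboutCentre(Gdiplus::Bitmap& src, Gdiplus::Bitmap& dest, float angleDegrees)
{
    Gdiplus::Graphics g(&dest);
    g.Clear(Gdiplus::Color::Black);

    // With the default (Prepend) matrix order, points are transformed in the
    // reverse order of these calls: centre the source on the origin, rotate,
    // then move the origin to the centre of the destination.
    g.TranslateTransform(dest.GetWidth() / 2.0f, dest.GetHeight() / 2.0f);
    g.RotateTransform(angleDegrees);
    g.TranslateTransform(-(float)src.GetWidth() / 2.0f, -(float)src.GetHeight() / 2.0f);

    g.DrawImage(&src, 0, 0);
}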

Surface Pro 3, Front and rear cameras: MFCaptureToFile does not work

I tested MFCaptureToFile on a Surface Pro 3 with the front and rear cameras. They don't work.

https://msdn.microsoft.com/en-us/library/windows/desktop/ee663604%28v=vs.85%29.aspx

External cameras work fine, as expected. Any clues?

Virtual Cable and output

Hello All,

We need a virtual cable for the following requirement:

Here is the complete explanation with drawing:





We need to switch dynamically from case 1 to case 2 programmatically. Please help with this using the C/C++ Win32 API.

 

We need to know how to connect the virtual cable to the corresponding outputs programmatically.

--- Misbah


Senior Design Engineer, T.E.S Electroni Solutions (Bangalore, India) | www.tes-dst.com | email: misbah.khan@tes-dst.com

Source Resolver CreateObjectFromURL hangs when called from ByteStreamHandler

Hi

I have a Media Foundation bytestream handler for a specific extension, and for a specific case I have to create a media source from a modified URL within the bytestream handler. I am trying to do this by calling pSourceResolver->CreateObjectFromURL. The problem is that this function hangs when called from within the bytestream handler. It also hangs for pSourceResolver->CreateObjectFromByteStream (no error returned, it just hangs).

I wondered whether I was making a mistake with the options, but when I call pSourceResolver->CreateObjectFromURL from my own scheme handler, it gives me a valid media source that works.

I didn’t see any documentation about this, but is it that we can’t call the source resolver from a bytestream handler? I understand this would be like a recursive call, but there should be some way to work around it. Or is this a bug? Or should I be using some other function to get a valid media source for a valid URL when I need to call it from the bytestream handler?
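
For reference, this is roughly what I do inside my handler's BeginCreateObject (simplified; pwszModifiedUrl is the URL I build from the original byte stream):

IMFSourceResolver *pResolver = NULL;
IUnknown          *pSourceUnk = NULL;
MF_OBJECT_TYPE     objectType = MF_OBJECT_INVALID;

HRESULT hr = MFCreateSourceResolver(&pResolver);
if (SUCCEEDED(hr))
{
    // This synchronous call never returns when made from inside the
    // bytestream handler (the same call works from my scheme handler).
    hr = pResolver->CreateObjectFromURL(pwszModifiedUrl,
                                        MF_RESOLUTION_MEDIASOURCE,
                                        NULL,          // no property store
                                        &objectType,
                                        &pSourceUnk);
}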

Thanks,

Kaks

